Research
Sixty-six percent of randomized controlled trials in psychotherapy were conducted by researchers who were already committed to the treatment they were testing.1
The technical term is researcher allegiance, defined in the research as “a belief in the superiority of a treatment” held by the person running the study. In practice, it means the researcher knew what they expected to find before the first participant arrived. That belief inflated reported effect sizes by 30%, on average. Of the 793 allegiant studies identified in one systematic review, only 3.2% disclosed the conflict. Exactly one study out of 793 actually controlled for it.
This is documented across multiple systematic reviews. The researchers who found the problem are largely the same researchers whose field produced it.
Add publication bias on top of that. A study of US National Institutes of Health-funded psychotherapy trials found that 23.6% were never published. Adding the missing data back reduced reported effect sizes by 25%. After adjusting for publication bias, the difference between CBT and other psychotherapy approaches was no longer statistically significant.2
CBT’s research advantage over other approaches disappears when you account for which studies got buried.
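The mechanism behind that correction is plain averaging. A minimal sketch, using invented trial numbers rather than the study’s actual data, shows how a tranche of unpublished near-null trials pulls a pooled effect size down by about a quarter:

```python
# Hypothetical illustration of publication bias: effect sizes (Cohen's d)
# for 10 published trials and 3 trials that never appeared (~23% of the
# total). All numbers are invented for the sketch, not taken from the study.
published = [0.55, 0.60, 0.48, 0.50, 0.52, 0.45, 0.58, 0.47, 0.53, 0.51]
unpublished = [-0.05, 0.02, -0.09]  # near-zero results, the kind that get buried

mean_published = sum(published) / len(published)        # what the literature shows
all_trials = published + unpublished
mean_all = sum(all_trials) / len(all_trials)            # what actually happened

print(f"published only:  d = {mean_published:.2f}")
print(f"with unpublished: d = {mean_all:.2f}")
print(f"reduction: {100 * (1 - mean_all / mean_published):.0f}%")
```

The pooled estimate drops from 0.52 to 0.39, a reduction of about 25%, without any single published trial being wrong. The bias lives entirely in which results made it into print.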
That’s the foundation of the “evidence-based” label. A literature where two-thirds of studies were run by believers, roughly a quarter of the data was suppressed, and the stated advantage collapses when you correct for both.
The absence of research on other approaches gets read as absence of evidence of effectiveness. What it actually reflects is an absence of allegiant researchers with institutional funding.
The medical model requires a diagnosis before it can authorize anything. A DSM code justifies reimbursement, targets a drug, defines what a completed treatment looks like and determines how long it runs. Without a code, there is no billing. Without billing, there is no funding. Without funding, there is no research.
Strategic therapy works from the structure of the presenting problem: the pattern of interaction that maintains the symptom, the function the problem serves, what keeps it in place. The DSM has no category for that. Without a category, there is no reimbursement code. Without a reimbursement code, there is no institutional support. And an RCT requires a standardized, replicable protocol applied uniformly across participants, the opposite of a method designed to change based on what walks in the door.
An approach that resolves problems in weeks rather than years is also structurally incompatible with a system built around long-term care. Individual clinicians are downstream of incentive structures they didn’t design.
Research gets funded to validate treatments the funding system already rewards. Strategic therapy was never going to be that treatment, regardless of its clinical outcomes.
There’s a concept in psychotherapy research called the Dodo Bird Verdict, named after the Dodo’s declaration in Alice in Wonderland that everybody has won and all must have prizes. Multiple meta-analyses have found that different therapy approaches produce roughly equivalent outcomes, and that common factors (therapeutic alliance, therapist competence, client expectation) account for more outcome variance than the specific model being used.
The choice of approach, in other words, matters less than the research wars suggest. The outcomes attributed to any specific model reflect the practitioner’s skill as much as the model’s mechanics.
What actually distinguishes approaches is mechanism and speed. Strategic therapy adapts to the client and context, not to a diagnostic category. The first session follows a structured protocol, built for supervision and learning, to give therapists a consistent starting point. After that, the approach reads what is maintaining the problem in this person, in this situation, and responds to that. A practitioner working strategically in session three with one client looks nothing like the same practitioner in session three with another. That’s clinically precise. It’s also incompatible with RCT methodology, which requires the same protocol delivered uniformly across all participants. That is why strategic therapy doesn’t accumulate the kind of research that manualized models do.
Whether it works is something practitioners answer case by case, over years of clinical experience. The research gap doesn’t answer it.
It just explains who chose not to look.
1. Dragioti E, et al. (2015). Disclosure of researcher allegiance in meta-analyses and randomised controlled trials of psychotherapy: a systematic appraisal. BMJ Open. PMC4458582
2. Driessen E, et al. (2015). Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? PLOS ONE. PubMed 26422604