Muddled psychiatric trial
SPARX – computerised cognitive behaviour therapy for adolescents
This week’s BMJ (here) reports a randomised trial testing whether SPARX, cognitive behaviour therapy (CBT) delivered by computer, was no less effective than “treatment as usual” for adolescents with depression. So long as “treatment as usual” includes conventional CBT, which probably works, this is a reasonable trial design.
The study was registered here with a planned sample size of 600 (300 per group), but only 187 were randomised. The discrepancy is ignored in the printed BMJ paper, but two different explanations are given, one on the trial registration site and another in the full electronic version of the published paper. Neither makes any sense.
Here’s an extract from the trial registration site justification.
“In April 2010, we carried out a planned blinded interim summary of the primary outcome measure, exploring the change in the CDRS-R, allowing for baseline levels and blocking by sites, based on 55 participants who had baseline and 2 month CDRS-R scores, produced a standard deviation of the change of 10.2. We established that if there were 110 participants completing the study this would therefore, provide sufficient power (80%) to detect a difference of 5.5 units on the CDRS-R change as statistically significant (two-tailed α=0.05). If the sample size were 130 this would decrease to 5.1 and if 200 participants were recruited this would further decrease to 4.1 units.”
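For what it’s worth, the registration site’s arithmetic does roughly check out. A back-of-envelope sketch (using the standard normal approximation to the two-sample comparison of means; the function names are mine, not the trial’s):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(delta, sd, n_per_group, z_alpha=1.96):
    """Approximate power of a two-sided two-sample comparison of means
    (normal approximation, equal group sizes, alpha = 0.05)."""
    return phi(delta / (sd * sqrt(2.0 / n_per_group)) - z_alpha)

def detectable_difference(sd, n_per_group, z_alpha=1.96, z_beta=0.8416):
    """Smallest difference detectable with 80% power at two-tailed alpha = 0.05."""
    return (z_alpha + z_beta) * sd * sqrt(2.0 / n_per_group)

# SD of change 10.2, 110 completers (55 per group):
# power to detect a 5.5-unit difference comes out at about 0.8, as quoted
print(round(power_two_sample(5.5, 10.2, 55), 2))

# With 130 and 200 completers, the detectable difference shrinks,
# close to the quoted 5.1 and 4.1 units
print(round(detectable_difference(10.2, 65), 2))
print(round(detectable_difference(10.2, 100), 2))
```

So the numbers are internally consistent; the problem is not the arithmetic but the decision to halve the sample size mid-trial.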
And here’s an extract from the paper’s justification for the change.
“The children’s depression rating scale-revised contains several categories—for example, ‘depressive disorder might be confirmed in a comprehensive diagnostic evaluation’ and ‘depressive disorder is likely to be confirmed.’ The range of raw scores between these categories is 12-14. We argued that a difference of less than half a category was not likely to be clinically significant.”
Be honest. Recruitment was slow and you couldn’t be bothered.
Perhaps it doesn’t matter. The controls got “treatment as usual” which according to the protocol was going to “include psychotherapy, group counselling, individual counselling and psychoeducation”. No mention in the protocol of the controls getting CBT, but perhaps this would be picked up later. The plan was that a “detailed description of treatment as usual will be collected from each clinician at the end of the treatment.”
In the full version of the main paper all we are told is that 74 got “counselling”, 11 nothing at all, and two got drugs. That’s all. No mention of whether the counselling was CBT, or another type of psychobabble.
The explanation is buried in the discussion. They failed to collect “good data on adherence to treatment as usual”[…] and “clinicians often forgot to fill in our forms”. So why is the BMJ publishing it?
And the result? Both groups improved by the same amount. Here are the primary outcome scores in each group.
The error bars are SEMs so the raw data, which they wisely keep hidden, is probably all over the place.
To their credit they do report one hard outcome – two children allocated to SPARX and one control child attempted suicide. On the basis of this the authors conclude that their SPARX programme is effective!
They’ve shown nothing of the sort. It’s no better and no worse than an undefined ragbag of “treatment as usual” in which none of the controls were documented to get conventional CBT, the only treatment with any supportive evidence.
And what about the 11 controls who got no treatment at all? On a post hoc analysis SPARX was significantly better than a “per protocol group of controls” who actually got some “treatment as usual”. The authors interpret this as encouraging for SPARX. But it implies that the 11 who got no treatment did best of all. It is equally plausible that “treatment as usual” was harmful.
Funding – New Zealand Ministry of Health
I wonder what the BMJ would say if a pharmaceutical company tried to pass off a new drug treatment for adolescent depression on the basis that it was no worse than controls given one of a range of undefined and unproven treatments, or no treatment at all? Would they be reassured when the company said they meant to ask what treatment the controls got, but the doctors “forgot to fill in our forms”?