STATS ARTICLES 2006
Is AA Effective? Wall Street Journal vs Cochrane Collaboration
Maia Szalavitz, October 27, 2006
WSJ claims Cochrane report misled media
At STATS, we’re huge fans of randomized controlled trials, the gold standard of evidence for working out whether drugs or therapies are effective. By comparing those who receive one treatment to those who receive another, or a placebo, randomized controlled trials allow scientists to determine whether improvements seen in patients are caused by the treatments under study.
But there is a problem with randomized controlled trials when it comes to evaluating certain psychological treatments, one that was missed in a recent Wall Street Journal report [subs. required] on the effectiveness of Alcoholics Anonymous (AA) as a treatment for alcoholism.
The Journal reported that a Cochrane review, which concluded that the data did not support claims that AA is more effective than other approaches, led to misleading reporting and headlines like “Review Sees No Advantage to 12-Step Programs.”
But that headline is, in fact, a fair characterization of the best data on AA. Contrary to the Journal’s claims that AA “hasn't been subjected to the gold standard of medical experiments, the double-blind randomized clinical trial,” there have been several trials that did just that.
They just didn’t happen to find an advantage for AA – a conclusion that the Journal finds impossible to accept in light of the fact that “untold multitudes of problem drinkers have become abstinent after attending A.A. meetings.” The paper points out that many studies have found a connection between AA attendance and reduced drinking; and even though this may just be a correlation rather than proof of cause and effect, it can, nonetheless, be “treated as powerful evidence.”
But what this shows is not that AA works in general but that AA works for those who choose to attend it. This is fine: there are many psychological treatments that work for those who prefer them, but are ineffective or even harmful to those who are randomly selected or even forced to participate in them.
Randomized controlled trials can’t tell you whether treatments like this are superior or inferior to each other, because randomization eliminates the self-selection that is crucial to their success. Whether self-selection is the only thing that matters – whether, for example, people who would have gotten sober anyway, because they are highly motivated, are precisely those who choose to attend AA, while the unmotivated stay away – cannot be discerned without much more research.
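The self-selection confound described above is easy to demonstrate numerically. The toy simulation below (all numbers hypothetical, chosen only for illustration) builds a population in which AA attendance has zero causal effect on abstinence, yet motivated drinkers are far more likely both to attend AA and to quit. The observed abstinence rate among attenders still comes out well above that of non-attenders, exactly the kind of correlation that can be mistaken for "powerful evidence."

```python
import random

# Hypothetical toy model: motivation drives both AA attendance and abstinence;
# attending AA itself has NO causal effect on the outcome.
random.seed(42)

people = []
for _ in range(100_000):
    motivated = random.random() < 0.5                              # underlying motivation to quit
    attends_aa = random.random() < (0.7 if motivated else 0.1)     # motivated drinkers self-select into AA
    abstinent = random.random() < (0.6 if motivated else 0.2)      # outcome depends only on motivation
    people.append((attends_aa, abstinent))

def abstinence_rate(group):
    """Fraction of the group that ends up abstinent."""
    return sum(abstinent for _, abstinent in group) / len(group)

attenders = [p for p in people if p[0]]
non_attenders = [p for p in people if not p[0]]

# Attenders show a much higher abstinence rate despite AA doing nothing here.
print(f"abstinence among AA attenders:  {abstinence_rate(attenders):.2f}")
print(f"abstinence among non-attenders: {abstinence_rate(non_attenders):.2f}")
```

Randomizing people into AA in this model would erase the gap entirely, which is the article's point: an observational correlation between attendance and sobriety cannot distinguish a treatment effect from self-selection.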
But, as one of the studies covered in the Cochrane review found, if AA has any advantages over other treatments that do not carry its baggage of surrendering to a higher power and asking his help with “character defects,” those advantages haven’t yet been found in years of study.
The Journal was right to say that many professional treatments simply make people pay for what AA provides free of charge (such as advice on working its 12 steps), and that this is why researchers should determine whether such approaches are worth insurance and government funding. But the paper should have critiqued the dominance of the AA approach in professional treatment despite the lack of evidence for its efficacy, instead of critiquing the Cochrane review for relying on inadequate methodology.