When good intentions lead to bias in obesity research

Vartanian et al. raised the critical question of whether industry funding could be skewing research toward finding no link between soda and weight gain. As they note:

“The issue of industry funding has been the focus of considerable scrutiny in several areas of medical research, particularly pharmaceutical studies. Our analyses revealed that the overall pattern of results differed significantly when studies funded and not funded by the food industry were compared… the average overall effect size for industry-funded studies was significantly smaller than the average effect size for nonfunded studies. This discrepancy was particularly striking in studies examining the effects of soft drink consumption on energy intake; effect sizes were moderate (r = 0.23) for nonfunded studies and essentially nil (r = 0.05) for funded studies.”

Industry funding as a source of distortion in scientific research is a serious charge. But in this case, an unusual thing happened. University of Alabama biostatistician David Allison and research associate Mark Cope decided to analyze the data behind this claim.

“We requested, and Dr Vartanian graciously provided, his meta-analysis data file. Focusing on cross-sectional studies, because a large number had adiposity indicators as outcomes, we conducted publication bias (PB) detection analyses. PB causes the sample of studies published to not constitute a representative sample of the relevant studies that hypothetically could have been published… Typically, PB involves statistically significant studies having a higher likelihood of being published than non-statistically significant ones.”

We are more accustomed to the idea that the failure to publish results that find no effect dogs pharmaceutical research for purely commercial reasons (if you have spent a billion dollars developing a drug that does nothing, why advertise that fact?). But a signal failing of the academic publishing model is that academic journals, like the mainstream media, want to tell their readers something new and exciting, not that nothing happened today, or that study Y failed to confirm the novel hypothesis earlier published to great acclaim in study X.

The problem is that negative findings – null results – are vital to scientific progress, because they help to close off false hypotheses and fruitless lines of enquiry. What if publication bias existed in obesity research, not out of a desire for profit on the part of the food companies, but out of a desire by public health researchers to pursue a higher “good”?
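
The mechanism is easy to make concrete. The following is a hypothetical simulation, not an analysis from either paper: many small studies estimate the same modest true effect, but only the ones that reach statistical significance get published, so the published record overstates the effect. All the numbers (the true effect, study count, and sample size) are illustrative assumptions.

```python
# Hypothetical simulation of publication bias: if only "significant"
# results are published, the published sample is not representative
# and its average effect overstates the truth.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.05             # assume a small true association
n_studies, n_per_study = 500, 50
se = 1 / np.sqrt(n_per_study)  # sampling error of each study's estimate

published = []
for _ in range(n_studies):
    estimate = true_effect + rng.normal(0, se)
    # suppose only significant positive results reach print
    if estimate / se > 1.96:
        published.append(estimate)

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")
# the published mean lands far above the true effect, because the
# null and negative results never appeared
```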

Allison and Cope (who acknowledge prior industry funding, but note that this particular study was supported in part by the National Institutes of Health) found “a clear inverse association between study precision and association magnitude.” In other words, the smaller (and therefore less precise and less statistically powerful) the study, the greater the association it tended to find between soda and weight gain, and the more likely it appeared to be published; yet this was primarily true of the non-industry-funded studies. Industry-funded studies, on the other hand, showed smaller associations and greater statistical precision.
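
For readers who want to see what detecting that pattern looks like, here is a minimal sketch of one standard funnel-plot asymmetry check (a precision-effect test), not the authors’ actual analysis: regress each study’s effect size on its standard error. A significantly positive slope is exactly the “inverse association between study precision and association magnitude” described above. The effect sizes and standard errors below are made-up illustrative values.

```python
# Sketch of a precision-effect test for small-study bias:
# regress effect sizes on standard errors. A positive slope means
# less precise studies report larger effects. Numbers are hypothetical.
import numpy as np
from scipy.stats import linregress

effect_sizes = np.array([0.31, 0.27, 0.22, 0.15, 0.09, 0.06])  # hypothetical r values
std_errors = np.array([0.14, 0.12, 0.09, 0.06, 0.04, 0.03])    # hypothetical SEs

fit = linregress(std_errors, effect_sizes)
print(f"slope = {fit.slope:.2f}, p = {fit.pvalue:.4f}")
```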

Publication bias did not account for the entire gap between industry-funded and non-industry-funded studies, so Allison and Cope do not rule out some other form of bias that leads to the difference in results; but what that bias was, specifically, they could not say.

These findings run so much against the grain of conventional wisdom that it’s worth noting what the editors of the International Journal of Obesity said in an accompanying commentary on Allison and Cope’s study:

“In what should be of major concern to the scientific community, an analysis of papers from industry-funded vs non-industry-funded studies showed that the industry funded papers actually were more accurate in reporting data, especially in reporting negative data, than were the other papers. Cope and Allison suggest that if data that were not in the direction desired and did not prove that sugar-sweetened beverages were ‘bad’, they tended not to be reported when studies were funded from non-industrial sources.”

But this wasn’t all Allison and Cope found. Two widely cited studies with mixed findings on the link between weight and soda consumption had been characterized by other researchers in a “misleadingly positive” way. In fact, only 12.7 percent and 33 percent, respectively, of the citations to these two papers accurately described their overall results. Moreover, Allison and Cope note that even the press releases put out for the two papers failed to convey the findings accurately, giving both the spin of positive associations.

The authors concluded that “white hat bias” — a tendency to distort information to advance good causes — is compromising the reliability of research on obesity. “Interestingly,” they write, “although many papers point out what seem to be biases resulting from industry funding, we have identified here, perhaps for the first time, clear evidence that [White Hat Bias] can also exist in opposition to industry interests. Whether [White Hat Bias] is intentional or unintentional, and whether it stems from a bias toward anti-industry results, significant findings, feelings of righteous indignation, results that may justify public health actions, or yet other factors, is unclear.”

There is, however, an even greater and more surprising failing in the research on soda and weight gain than publication bias, and that is poor study quality.

 

Next: A “disappointing” lack of good research

 

 

 
