STATS ARTICLES 2010
BPA and heart disease: Smoking gun or statistical smoke?
Trevor Butterworth, January 13, 2010
A new study claims an association between bisphenol A and heart disease. We've been here before.
A new study by researchers at the University of Exeter in England, published in PLoS Medicine (Melzer et al., 2010), claims to have replicated an association between urinary BPA levels and heart disease.
In 2008 a study by the same researchers in the Journal of the American Medical Association (Lang et al., 2008) claimed a similar association, along with correlations to diabetes and liver-enzyme abnormalities, using data from the US National Health and Nutrition Examination Survey (NHANES) for 2003-2004. The European Union's agency responsible for evaluating the risk from BPA concluded that a survey which sampled BPA in urine just once and then correlated it with diseases or conditions that can take years to develop could not be used to determine risk. The association could have been a chance finding or the result of confounding factors beyond the scope of the study.
The fundamental problem with the first study is replicated in the second: cross-sectional analyses cannot determine causality.
But this basic statistical principle was ignored by news organizations with a history of BPA alarmism: The Toronto Globe and Mail claimed that a "Low amount of BPA can increase cardiac risk by 45%, study finds."
WebMD claimed that "Researchers have confirmed that the bisphenol A (BPA) -- widely used in plastics including baby bottles and other drink containers -- increases the risk of cardiovascular disease."
Reuters, on the other hand, simply synopsized the press release, which, notably, makes no mention of an increased risk. In statistics, an association with a higher rate or incidence of a disease is not the same as saying there is an increased risk for that disease.
Most major news organizations ignored the study - and were probably right to do so, given how little it actually can tell us. But it is worth dwelling on why it tells us so little, given the broad public alarm created by environmental activists over a substance which regulatory bodies around the world have repeatedly evaluated as posing no risk to humans.
As a preamble, note that urinary BPA levels are related to food intake: higher urinary BPA may simply indicate higher food intake, which in turn is more likely to be associated with heart disease.
By its nature, a cross-sectional study is a snapshot - in this case, measuring exposure and a health outcome at the same time. The immediate obstacle is that it is impossible to know how the two relate to each other. The association could be random. Or one factor could be associated with another, causal factor (such as ingesting more BPA through eating more food, where it's the kind of food consumed that's the risk factor). The Exeter researchers know this and, if you read the actual study, acknowledge this limitation: "The cross sectional nature of the associations reported need to be treated with caution, as it is theoretically possible, for example, that those with cardiovascular disease change their diets in such a way as to increase BPA exposure."
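The confounding scenario sketched above can be illustrated with a toy simulation (all numbers hypothetical): suppose overall food intake drives both urinary BPA and heart-disease risk, while BPA itself has no effect at all. A naive cross-sectional comparison will still find an association between BPA and disease.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: food intake drives BOTH urinary BPA
# and heart-disease risk; BPA has NO direct effect on disease.
food = rng.normal(0, 1, n)
bpa = 0.7 * food + rng.normal(0, 1, n)        # urinary BPA tracks intake
risk = 1 / (1 + np.exp(-(0.8 * food - 2)))    # risk depends on food only
disease = rng.random(n) < risk

# Naive cross-sectional snapshot: compare disease rates in the
# high-BPA half versus the low-BPA half of the sample.
high = bpa > np.median(bpa)
rate_high = disease[high].mean()
rate_low = disease[~high].mean()
# rate_high exceeds rate_low - a spurious "BPA-disease association"
```

The simulation is deliberately crude, but it shows how a one-time urine measurement correlated with disease status can look alarming even when the measured substance is causally inert.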
They claim, however, that their association should be explored further because they confirmed an earlier statistical association between BPA and heart disease; the new study, they say, refutes the idea that the original finding was a "statistical blip," and therefore the association is much more credible.
But this is far from a statistical slam dunk given that so many of the endpoints the researchers had previously measured ended up losing statistical significance. Was this due to a decline in BPA urinary concentrations from 2003/4 to 2005/6 - and more importantly, could the extent of the decline really be responsible for such differences? The Exeter study raises more questions than it answers, and this is one of the well-known problems of mining datasets for associations: such tentative associations seem to exist in a vacuum of plausibility.
Similarly, the lack of variance in BPA levels between individuals in each dataset also raises an interpretative problem: how could exposure differences so small between people result in health outcomes so different? The Exeter researchers speculate as to possible reasons for these outcomes, but the absence of biologically plausible explanations is glaring.
The Exeter researchers also appear to have pooled results from both datasets to confirm the earlier findings. But you cannot statistically combine existing evidence with new evidence in order to support the existing evidence; confirmation requires an independent dataset.
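A toy example (entirely hypothetical numbers) shows why pooling is not confirmation: combine an original sample that already showed a difference with a new sample that shows none, and the pooled test still looks "significant" - but the signal comes entirely from the original data. Only the new sample on its own is an independent check.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z statistic using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    return (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

n = 20_000
# "Original" sample: exposed group happened to show more disease
orig_exposed = rng.binomial(n, 0.15)
orig_control = rng.binomial(n, 0.10)
# "New" sample: no real difference between the groups
new_exposed = rng.binomial(n, 0.10)
new_control = rng.binomial(n, 0.10)

z_new = two_prop_z(new_exposed, n, new_control, n)
z_pooled = two_prop_z(orig_exposed + new_exposed, 2 * n,
                      orig_control + new_control, 2 * n)
# z_pooled is large because the original data dominate the pool;
# z_new, the only genuinely independent test, hovers near zero.
```

The pooled statistic "confirms" the original finding only because the original data are counted twice; the independent test tells a different story.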
What, too, of the possibility that the Exeter researchers replicated a bias in the original dataset? NHANES involves a lot of self-reported data, so if there is bias in the original 2003/4 dataset, it will also show up in the new dataset for 2005/6.
These are not trivial objections. Taken in the context of the EU's dismissal of a previous study by the same researchers using a similar methodology, the message for the real world is not to panic. Environmental activists have, unfortunately, seized on every finding, no matter how limited, to argue that the public is in grave danger from BPA, while ignoring the repeated dismissal of their alleged scientific evidence by regulatory agencies (who reported on the EU's rejection of the Exeter group's earlier study?). This one-sided presentation has been embraced, wittingly or unwittingly, by many journalists. Just remember: en masse, toxicologists are vastly more confident in regulatory agencies' ability to distinguish good studies from bad and to accurately assess risk than they are in the media or activist groups. The good news is that so few media outlets rushed to sound the alarm on this latest development.