How America became obsessed with BPA
The Milwaukee Journal Sentinel’s “Chemical Fallout” Crusade (cont'd)
One of the key factors in a well-designed toxicology study is that the chemical is tested at more than one dose level. Many of the independent studies that the Journal Sentinel “found” were dismissed in the risk assessments because they used only one dose level. (The NIEHS now says “single dose experiments are not acceptable” in present and future funding requests for research on BPA). As Dekant explains:
“When you just have a response at one dose, you always wonder if this is something really associated to the administration of a chemical or to whatever else. Moreover, studies with only one dose are useless for any assessment of health risks, since you can not determine a starting point for assessment, such as a NOAEL [No Observable Adverse Effect Level] or a benchmark dose. In the context of BPA, it is specifically of interest since, early on, vom Saal and others claimed non-linear dose-responses but never did any experiments to confirm. Now, they just say that all studies by industry are biased, which frees them of any need to demonstrate that they are correct.”
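Dekant's point can be illustrated with a toy sketch (hypothetical numbers, not from any BPA study, and a deliberately crude significance test): with several dose groups, the NOAEL is simply the highest dose showing no significant effect versus control; with a single dose group there is no lower dose to fall back on once an effect appears, so no starting point for risk assessment exists.

```python
from statistics import mean, stdev
from math import sqrt

def noael(control, dose_groups, t_crit=2.1):
    """Return the highest dose with no significant effect vs. control.

    dose_groups: dict mapping dose (mg/kg bw/day) -> list of responses.
    Uses a simple two-sample t statistic (equal n, pooled SD) against a
    fixed critical value -- a crude stand-in for the statistics a real
    risk assessment would use.
    """
    result = None
    for dose in sorted(dose_groups):
        obs = dose_groups[dose]
        n = len(obs)
        sp = sqrt((stdev(control) ** 2 + stdev(obs) ** 2) / 2)
        t = abs(mean(obs) - mean(control)) / (sp * sqrt(2 / n))
        if t < t_crit:      # no significant effect at this dose
            result = dose
        else:
            break           # effect seen here; stop climbing
    return result

# Hypothetical multi-dose design: an effect appears only at the top dose.
control = [10.1, 9.8, 10.3, 10.0, 9.9]
groups = {
    0.5: [10.0, 10.2, 9.9, 10.1, 10.0],
    5:   [10.2, 9.9, 10.1, 10.3, 9.8],
    50:  [12.5, 12.9, 13.1, 12.7, 12.8],
}
print(noael(control, groups))  # -> 5: highest dose without an effect

# A single-dose design that happens to show an effect leaves nothing
# to anchor an assessment: no NOAEL can be derived at all.
print(noael(control, {50: [12.5, 12.9, 13.1, 12.7, 12.8]}))  # -> None
```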
Because the charge of industry bias resonates so powerfully with journalists, it often serves to obscure both the fiscal realities of scientific research and the question of methodological rigor. In other words, it frees journalists from the need to demonstrate which studies are correct and which are not.
One of the problems in toxicology is that studies raising questions about public health are often small in scale, which means they can lack statistical power and their associations may well be random. Studies that then set out to confirm or refute such a finding need to be large in scale in order to produce statistical measures that suggest causality (the association also needs to be explained by a plausible biochemical process). But the bigger the study, the more expensive it is, which is why regulatory bodies tell industry to fund them. For example, in 2003, when the European Chemicals Bureau concluded that more research was needed due to uncertainty over possible low-dose effects of BPA, “a steering group, made up of experts from several EU member states, was subsequently set up to resolve this issue and proposed a 2-generation study on mice,” said an EFSA spokesman by email.
“This study (the second Tyl et al study, from 2006) was indeed funded by industry – as is general practice for chemicals which are to be marketed – but the steering group supervised the design of the study and the interpretation of its results, and was also able to comment during the conduct of the study. This study confirmed the NOAEL of 5 mg/kg bw per day, which is well above current exposure levels.”
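The earlier point about study size and statistical power can also be made numerically. The toy simulation below (entirely hypothetical, with a crude z-style test and unit variance) shows why a small study usually misses a modest true effect while still producing chance “findings” at the usual rate, and why confirmation therefore requires scale:

```python
import random
random.seed(1)

def detects_effect(n, true_diff, z_crit=2.0):
    """One simulated two-group study: does it declare the effect significant?

    Crude z-style test assuming a known unit standard deviation;
    illustrative only, not a real toxicological analysis.
    """
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_diff, 1.0) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = (2 / n) ** 0.5      # standard error of a mean difference, sd = 1
    return abs(diff) / se > z_crit

def power(n, true_diff, trials=2000):
    """Fraction of simulated studies that detect the effect."""
    return sum(detects_effect(n, true_diff) for _ in range(trials)) / trials

# A modest true effect (0.3 SD): the small study usually misses it,
# the large study usually finds it.
print(power(20, 0.3))    # low -- well under half of small studies detect it
print(power(200, 0.3))   # high -- the large design detects it reliably

# With no true effect at all, studies of any size still "find" something
# at roughly the nominal false-positive rate -- chance associations that
# only large-scale replication can weed out.
print(power(20, 0.0))    # near the ~5% false-positive rate
```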
To ensure rigor these studies are often conducted under a set of international standards called Good Laboratory Practice. “There is a quality assessment scale for toxicological studies regarding reliability,” said Dekant in an email.
“Studies performed under ‘good laboratory practice’ following study designs developed by scientific panels from the OECD (Organization for Economic Co-operation and Development) are usually considered of highest quality. Studies with explicit quality control such as those done in university labs but with good study design (several doses!) and adequate description are still reliable. Other studies with either limited reporting or flaws in the design (such as using only one dose or inadequate controls) are not considered reliable. This approach is widely used in assessment and is also a centerpiece in the EU REACH legislation. Due to the use of GLP, cover-ups are difficult, if not impossible.”
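The quality-ranking logic Dekant describes can be sketched as a simple decision procedure. This is an illustrative reduction of his description, not any regulator's actual scoring system – real assessments, such as those under REACH, rest on expert judgment:

```python
from dataclasses import dataclass

@dataclass
class Study:
    glp: bool                 # conducted under Good Laboratory Practice
    oecd_design: bool         # follows an OECD-panel study design
    multiple_doses: bool      # several dose levels, not just one
    adequate_reporting: bool  # methods and controls fully described

def reliability(s: Study) -> str:
    """Rank a study along the lines Dekant describes (sketch only)."""
    if not (s.multiple_doses and s.adequate_reporting):
        return "not reliable"        # design flaw or limited reporting
    if s.glp and s.oecd_design:
        return "highest quality"
    return "reliable"                # e.g. a well-designed university study

print(reliability(Study(True, True, True, True)))    # -> highest quality
print(reliability(Study(False, False, True, True)))  # -> reliable
print(reliability(Study(True, True, False, True)))   # -> not reliable
```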
EFSA elaborated in a statement to STATS:
“Many non-GLP studies were considered for the EFSA opinion of 2006 but were not considered adequate for reasons explained in the opinion. Each of the non-GLP studies examined, many of which concern neurodevelopmental effects, suffers from limitations with regard to the study design or the reporting of the results, and taken together the results show no reproducible pattern. A study does not necessarily have to adhere to GLP in order to be taken into account in EFSA’s opinions, but does need to demonstrate adequate design and show reproducible results. No study linking exposure to BPA at levels lower than the existing NOAEL has yet fulfilled these criteria.”
The Journal Sentinel simply ignored these distinctions, and instead appeared to argue that GLP regulations – which are federal regulations with the force of law – were being used to cover up the truth about BPA. As the paper reported on October 24, 2008,
“The guidelines, known as ‘Good Laboratory Practice,’ give greater credibility to studies that use more animals. National Institutes of Health guidelines limit the number of animals that can be tested by government scientists and those who work for many publicly funded institutions.
“The FDA's task force report on bisphenol A dismissed or gave lesser credence to hundreds of studies that showed the chemical caused harm. These studies were conducted by government and academic scientists, using state-of-the-art techniques and methods but did not have the stamp of Good Laboratory Practices.
“Instead, the agency relied on a handful of industry-funded studies that had the stamp, even though they were flawed in other ways.”
The FDA’s approach and conclusions were based on the National Toxicology Program’s report on BPA, and followed those of the European Food Safety Authority. As noted, the studies that were rejected were not rejected without cause – and all the risk assessments explain why. In contrast, the Journal Sentinel never separated oral studies from injection studies or single-dose studies from dose-response studies, and never presented a statistical argument explaining why the exclusion of non-oral studies was wrong or why an industry-funded dose-response study is inferior to an independent study using a single dose. Instead, it used the equivalent of an ad hominem argument: if a study is not independent, it cannot be trusted.