How America became obsessed with BPA
The Milwaukee Journal Sentinel’s “Chemical Fallout” Crusade
The Milwaukee Journal Sentinel devoted 40 articles, totaling more than 30,000 words, to BPA in 2008 alone (excluding separate articles written for the McClatchy News Service), and won four major awards for its coverage: the George Polk Award from Long Island University, the John B. Oakes Award for Distinguished Environmental Journalism from Columbia University’s Graduate School of Journalism, the Scripps Howard Award, and a Sigma Delta Chi Award from the Society of Professional Journalists. As the Journal Sentinel noted:
“Journal Sentinel reporters Susanne Rust and Meg Kissinger have won a George Polk Award - one of journalism's highest honors - for doing work long neglected by federal regulators: They stepped in to alert the public of ill health effects caused by exposure to chemicals commonly found in American homes.”
In bestowing the Oakes Award, Arlene Morgan, associate dean at Columbia University and director of the prize, said:
"We received almost 100 entries in the newspaper and magazine divisions for this prize and concluded that the Journal Sentinel once again led the nation in performing a watchdog role that has a far-reaching implication on health issues"
As Mark Katches, assistant managing editor for projects and investigations at the Journal Sentinel, noted on December 21:
"These stories have changed a lot of people's habits and how they shop when they walk into any grocery store. That's a powerful thing."
The paper’s position at the end of this epic coverage is best summarized by David Haynes in a piece that appeared on December 17, 2008, criticizing the FDA’s decision not to take action on BPA:
“If it wasn't clear before Monday's disappointing letter from the U.S. Food and Drug Administration, it should be clear now: The FDA is punting. The agency sees no reason to ban, or even restrict, the use of the chemical bisphenol A.
In the letter to its advisory board, the FDA said it would review more studies and do more research on BPA. Until then, the chemical should be considered safe for anyone to use, even babies.
Is this the FDA or the CYA?
The FDA has dithered for years, embracing studies that found the ubiquitous chemical to be harmless - nearly all of which were paid for by the chemical industry - while ignoring a much larger body of independent research that linked BPA to an array of health problems, including diabetes and cancer.
BPA has been studied to death. There is no need for further research to reach the conclusion that it shouldn't be in kids' products.
BPA is found in thousands of consumer products, including hardened plastics such as water bottles, dental sealants and the epoxy liners used to protect canned food from bacteria. The chemical, which mimics the hormone estrogen, poses a risk of disrupting the human endocrine system, a risk that increases in young children, who do not excrete the chemical as rapidly as adults.”
In light of this trenchant criticism, it is not surprising that the Journal Sentinel’s stories in 2008 consistently portrayed BPA as a serious threat to health and argued that industry-funded studies were being given precedence by regulatory bodies in the U.S., despite a large but undefined number of “independent” studies claiming otherwise.
But in claiming that there is “a much larger body of independent research” linking BPA to health problems, the paper never explained the grounds for this numerical claim. In 2007, the paper conducted a “review” of “258 research papers and found that a large majority showed bisphenol A was harmful to lab animals. Those that didn't find harm overwhelmingly were paid for by the chemical industry.” The selection of these studies appears to have been determined by an internet search of a medical database. The paper declared its report to be “groundbreaking” (David Haynes, Nov 9, 2008).
But it would appear that no scientific criteria were applied to determine whether the studies were reliable; instead, the key criteria for judgment were a positive finding of harm and whether the study was independent or industry funded. If a study found an effect and was independently funded, it was deemed significant; if a study found no effect and was industry funded, that too was deemed significant. In short, the “groundbreaking” study was unscientific even as it laid claim to determining what the science said about BPA. The scale of this error is revealed by the fact that the NIEHS has since revised its criteria for funding academic research because many of the studies it funded (and which the Journal Sentinel claims found harmful effects) were experimentally flawed.
Repeated requests to the Journal Sentinel (reporter Susanne Rust, deputy managing editor for projects Mark Katches) to explain why it didn’t appear to apply any statistical or methodological criteria to distinguish relevant from irrelevant research (such as the criteria recommended by the NTP statistics subpanel in 2001) went unanswered – as did requests for actual study citations (the paper, maddeningly, never provides citations for any study it refers to or characterizes).
This means that the Journal Sentinel began its investigation with a false premise, giving its readers an estimation and evaluation of the research findings on BPA that completely bypassed the scientific principles by which research is judged to be rigorous or not. In terms of brute numbers, there were, according to Willhite’s testimony to Congress in 2007:
“4,263 published scientific papers on developmental toxicity, acute and chronic toxicity, carcinogenesis, immunotoxicity, neurobehavioral toxicity, genotoxicity, biochemical toxicology, epidemiology studies, studies with workers exposed to bisphenol A and analyses of its concentrations in food, water and soil (summarized in Goodman et al., 2006; United Kingdom Health and Safety Executive, 2007, Willhite et al., 2008).”
Willhite, as lead author of NSF International’s paper, references 444 studies and papers in calculating a reference dose for BPA. But the critical question is not how many papers there are on a given position, but whether the design, route of exposure, dose-response relation, statistical power, and plausible mode of action give these studies the rigor and robustness needed to accept their conclusions and to establish their utility in a human risk assessment. If a study’s findings are then replicated by other research, it becomes part of the evidence for a particular position. This is how the weight of evidence is adjudicated – not simply by counting up studies that found results pro or contra and accepting the higher number as the truth.
A well-designed study counts; a well-designed study that has been replicated counts even more; a poorly designed study counts for little; and a poorly designed study that fails the test of replication counts for even less – or nothing at all.