



Science Suppressed:
How America became obsessed with BPA

Why was the route of exposure to BPA ignored? (cont'd)
What does that mean? Consider the following: in 2001, the NTP statistics panel warned that the litter, and not the individual rodent pup, should be used as the experimental unit for statistical analysis:


“For example, when significant litter effects are present, a study with dosed groups comprised of 20 pups will have more power if each of the 20 pups is from a separate litter rather than having four pups from each of five litters. The false positive rates will be identical in both cases. Thus, the authors' emphasis on increasing the number of pups per litter rather than increasing the number of litters is misguided.”

Numerous studies have failed to follow this requirement, thus inviting disqualification when reviewed for risk assessment (Goodman et al. 2006). Willhite (2008) also notes that the failure of various studies to account for inter-litter variability “increased the numbers of incorrect conclusions concerning the presence or absence of adverse effects.”
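To see why this matters, consider a minimal simulation sketch (written here in Python with NumPy and SciPy; the litter and pup variances are arbitrary illustrative values, not estimates from any BPA study). It generates data with no treatment effect at all, then runs a pup-level t-test that wrongly treats littermates as independent. With litter effects present, the four-pups-from-five-litters design declares “significant” differences far more often than the nominal five percent:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def false_positive_rate(n_litters, pups_per_litter, n_sim=10_000):
        """Share of null simulations with p < 0.05 when each pup is
        (incorrectly) analyzed as an independent experimental unit."""
        hits = 0
        for _ in range(n_sim):
            groups = []
            for _group in range(2):  # control vs. dosed; no true effect
                litter_means = rng.normal(0.0, 1.0, n_litters)  # litter effect
                pups = rng.normal(litter_means.repeat(pups_per_litter), 1.0)
                groups.append(pups)
            _, p = stats.ttest_ind(groups[0], groups[1])
            hits += p < 0.05
        return hits / n_sim

    print(false_positive_rate(20, 1))  # 20 singleton litters: about 0.05
    print(false_positive_rate(5, 4))   # 5 litters of 4 pups: far above 0.05

Analyzing litter means rather than individual pups restores the nominal error rate, which is why risk assessors insist on the litter as the experimental unit.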

But more to the point, by 2006, large multi-generational studies conducted under rigorous protocols had failed to show the low-dose effects that vom Saal continued to warn about in the media. One of vom Saal's early and important contentions was that BPA behaved just like the hormone estradiol at a cellular level. But Tyl and colleagues controlled for animal strain, feed, and the use of positive controls in a two-generation reproductive toxicity study, and while the positive control, estradiol, produced effects, BPA did not. (Tyl et al, Two-Generation Reproductive Toxicity Study of Dietary Bisphenol A in CD-1 (Swiss) Mice, Toxicological Sciences 2008; 104(2): 362-384.)

In 2007, research by the EPA's Kembra Howdeshell, who had collaborated with vom Saal on several studies earlier in her career, found that gestational and lactational exposure to ethinyl estradiol and BPA across a broad range of orally administered low doses in rats produced effects only for ethinyl estradiol. (Howdeshell et al, Gestational and Lactational Exposure to Ethinyl Estradiol, but Not Bisphenol A, Decreases Androgen-Dependent Reproductive Organ Weights and Epididymal Sperm Abundance in the Male Long Evans Hooded Rat, Toxicological Sciences 2008; 102(2): 371-382; doi:10.1093/toxsci/kfm306.) Neither of these studies received any mainstream media coverage.

The cumulative effect of all this research and statistical analysis is that vom Saal, though highly vocal about the risks of BPA and the media's go-to source for explaining the science, has seen his research and his claims repeatedly rejected in regulatory assessments of the chemical's risk over the past decade. When the Milwaukee Journal Sentinel calls him a “leading authority” on the chemical, that is the judgment of journalists, not of his fellow toxicologists. His contention that BPA is highly toxic to humans has not been accepted by any major risk assessment conducted in the last decade. Indeed, the European Food Safety Authority (EFSA) went in the opposite direction, raising the reference dose for BPA by a factor of five, meaning that it judged the chemical safe at lifetime daily intakes five times higher than previously allowed.

NSF International's survey of the research on BPA broadly concurred, though it concluded that the reference dose should be set slightly lower to account for hypothetical neurodevelopmental risks. Even so, the quantity of BPA that we can safely ingest is vastly greater than the quantity of BPA that we actually do ingest.
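The arithmetic behind “vastly greater” is straightforward. Here is a minimal sketch using EFSA's 2006 tolerable daily intake of 0.05 mg per kilogram of body weight per day (the fivefold increase described above); the intake and body-weight figures are purely hypothetical values chosen for illustration:

    # Margin-of-exposure arithmetic; the intake figure below is a
    # hypothetical illustration, not a measured exposure estimate.
    tdi_ug_per_kg_day = 50.0       # EFSA 2006 TDI: 0.05 mg/kg bw/day
    intake_ug_per_kg_day = 0.5     # assumed daily intake (illustrative)
    body_weight_kg = 60.0          # assumed adult body weight

    allowed_ug_per_day = tdi_ug_per_kg_day * body_weight_kg    # 3,000 µg/day
    actual_ug_per_day = intake_ug_per_kg_day * body_weight_kg  # 30 µg/day
    print(allowed_ug_per_day / actual_ug_per_day)              # 100x margin

On these assumptions, a 60 kg adult could ingest a hundred times more BPA each day before reaching the tolerable daily limit.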

And yet EFSA's and NSF International's risk assessments have been largely ignored by the media; instead, reporters have continued to rely on vom Saal, on scientists he has worked with or who endorse his position, and on the environmental activist groups that have backed him.


BPA and statistics: why not all studies are created equal

The NTP also created a statistics subpanel to provide “an independent assessment of the experimental design and data analysis used in each of the studies and, perhaps even more important, to identify and discuss key statistical issues relevant to all studies.” The statistics panel identified, among others, the following key points, which are reproduced here at length to illustrate that in risk assessment, studies are not accepted or rejected for trivial reasons; this, again, is something press coverage fails to explain. (A short numerical sketch of the subpanel's point about effective sample size follows the excerpt.)

A. Study sensitivity (power) – One important experimental design consideration is a study's power, which is defined as the probability of detecting a treatment effect if it is present in the data. Study sensitivity or power is influenced by a number of factors: (i) sample size; (ii) the underlying variability of the data; (iii) the magnitude of the treatment effect that is present; and (iv) the method of statistical analysis and the associated level of significance chosen. Obviously, a larger study will generally have more power for detecting chemical-related effects than a smaller study. Moreover, the interpretation that a study is "negative" should be given more weight when relatively large sample sizes are used. The number of animals per group ranged from 3 to 179 in the studies that were re-evaluated, and this is a factor that must be considered when comparing and interpreting study results.

Importantly, the effective sample size of a study is the number of independent sampling units. Thus, if littermates are used and litter effects are present in the data, the effective sample size becomes the number of litters, not the number of individual pups.

B. Replication - Reproducibility of experimental results is an important and necessary feature of any scientific finding before it can be generally accepted as valid. There are several types of replication, which are discussed below. First, there is replication within an individual experiment. If multiple replications are used within a study, then each experimental group should be represented in each replicate. In one experiment we evaluated, three replicates were used, but the mid and high dose groups (which had only three animals per group) were represented only once, and in different replicates. Additionally, there were significant differences among the control groups in the three replicates, although the study authors pooled these groups in their statistical analysis.

This is not an ideal experimental design or data analysis. In another study we investigated, control and three dosed groups were each evaluated in separate time frames, extending over a period of one year.

The Statistics Subpanel felt that the lack of concurrent controls was a serious deficiency of the experimental design that greatly limited the general inferences that could be drawn from this study.

Another type of replication is the reproducibility of results among separate experiments within a given laboratory. In one publication we evaluated, the investigator carried out eight similar experiments with the same chemical, although these technically were not replicates, because different dose levels of the test compound were used in some experiments. This investigator found statistically positive effects on uterus weight in four experiments and no effect in the other four experiments. The author concluded that his investigation had shown that even the same investigator may be unable to repeat experimental findings, and we agree with this conclusion.

…Perhaps the most important type of replication is reproducibility among different laboratories trying to confirm the findings of another laboratory. Among the data sets we evaluated, there were several studies that attempted to duplicate the studies of other investigators. Some confirmed the original results, but many did not. It is difficult to achieve exact reproducibility of all aspects of an experimental design, and when conflicting results are obtained by different investigators, one should try to identify study differences that could account for the contradictory results.
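The subpanel's point about effective sample size (point A above) can be put in numbers with the standard design-effect approximation from cluster sampling, n_eff = n / (1 + (m - 1) × ICC), where m is the number of pups per litter and ICC is the intra-litter correlation. A minimal sketch, with ICC values chosen arbitrarily for illustration:

    def effective_n(n_pups, pups_per_litter, icc):
        """Design-effect approximation: n / (1 + (m - 1) * icc)."""
        return n_pups / (1 + (pups_per_litter - 1) * icc)

    for icc in (0.0, 0.3, 0.6, 1.0):
        print(icc,
              effective_n(20, 1, icc),   # 20 pups from 20 litters
              effective_n(20, 4, icc))   # 4 pups from each of 5 litters

With no litter effect (ICC of 0), both designs behave like 20 independent animals; as littermates become perfectly correlated (ICC of 1), the clustered design's effective sample size collapses to 5, the number of litters, exactly as the subpanel says.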

The NTP panel also warned that unblinding studies before organ sizes and weights are measured is a source of potential bias, and it found that in some studies a failure to control for a single outlier in a group of animals (possibly nothing more than a decimal point placed in the wrong position, giving one animal an organ weight ten times greater than the mean) radically skewed the final results. It is the kind of detail that determines how regulatory agencies set reference doses and determine risks, but it is science and statistics at a level of complexity that is simply not part of the media and public debate on chemical and pharmacological risk.
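The decimal-point scenario is easy to reproduce. In this hypothetical sketch the organ weights are invented for illustration; a single animal recorded at ten times its true weight nearly doubles the group mean, and even a crude screen against the group median catches it:

    import numpy as np

    # Hypothetical organ weights (grams) for a group of ten animals.
    weights = np.array([0.48, 0.51, 0.47, 0.52, 0.50,
                        0.49, 0.53, 0.46, 0.51, 0.50])
    typo = weights.copy()
    typo[3] = 5.2                # 0.52 g entered as 5.2 g: misplaced decimal

    print(weights.mean())        # ~0.50 g
    print(typo.mean())           # ~0.97 g: one typo nearly doubles the mean

    # Crude outlier screen: flag anything more than 3x the group median.
    print(typo[typo > 3 * np.median(typo)])  # flags the 5.2 g entry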

USA Today, for instance, ran a large photograph of vom Saal with a mouse on his hand under the headline “Can a plastic ‘alter human cells’?” The subhead claimed that “scientists say the chemical can alter cell behavior at very low levels – in the parts per trillion range – yet humans are consistently exposed to BPA at levels 10 to 100 times greater.” Vom Saal is quoted saying “this is a phenomenally potent chemical.” USA Today reporter Liz Szabo turns to environmental activists to explain that

 “the FDA's standard is biased and outdated, leading the agency to discount a dozen key studies that the [National] toxicology program used to conclude that BPA may pose a threat.

Although the FDA's laboratory guidelines aim to prevent fraud by requiring detailed notes, they don't necessarily ensure good science, says Sonya Lunder of the Environmental Working Group, a private organization that says BPA is dangerous. Independent academic researchers are performing far more sophisticated tests than the ones upon which the FDA based its decision, she says.”

USA Today did not tell readers that the National Toxicology Program dismissed most of the evidence against BPA and had negligible concern for many of the risks vom Saal had been claiming for over a decade. Nor did it put the NTP's “some concern” into scientific perspective, namely that it rests on very limited data. Instead, the reporter jumps immediately to say that because the FDA said BPA is safe, it was using “controversial” methods to assess the evidence. The Environmental Working Group, a highly visible activist group (weirdly described as “a private organization”), is allowed to explain why this is the case.

The explanation doesn't actually deal with the “limited evidence” the NTP found for neurodevelopmental toxicity; it leaps instead to attack the general principles the FDA used to assess all the evidence for BPA, and the priority given to what is called Good Laboratory Practice (GLP). While there are many valid criticisms of GLP (it is onerous and stifles creativity), as far as BPA goes, the EWG was in effect claiming that studies with greater statistical power are inferior to studies with less statistical power, and that studies which tested for low-dose exposures with just one low dose are superior to those which tested with multiple low doses.

As already noted, the National Institute of Environmental Health Sciences will no longer fund what the EWG calls “far more sophisticated tests” on BPA, because they were actually not that sophisticated. It is not simply a matter of paperwork: these studies did not have sufficient statistical power (that is, sufficient sample sizes), did not use multiple doses, and did not use oral routes of exposure.

In June 2008, Time magazine published a story, “The Truth About Plastic,” which is almost entirely sourced to vom Saal, “a prominent member of a group of researchers who have raised worrisome questions in recent years about the safety of some common types of plastics.” There is no mention at all of any of the criticism of vom Saal's work, save one line noting that the FDA and the European Union say there is no danger. The only other scientist quoted in the piece recommends avoiding plastics on the grounds that it is better to be safe than sorry.

In “The Dirty Truth About Plastics,” Discover magazine featured vom Saal declaring that BPA “is the global warming of biology and human health,” along with two other researchers who claim BPA is dangerous. No critics of their work were quoted.

Some astonishingly basic questions were not asked in newsrooms around the country: if multiple risk assessments around the world, independently of each other, keep rejecting the same body of research on the same methodological grounds, why are we exclusively promoting this rejected research and ignoring the methodological problem? If vom Saal's arguments have been consistently ignored or rejected, shouldn't we find out why? Shouldn't this worldwide consensus that BPA is safe set off an alarm bell, or prompt some degree of skepticism about the idea that BPA is, as the Environmental Working Group puts it, threatening the lives of “millions of babies” and is responsible for a host of major diseases? With a few exceptions, this skepticism simply did not surface in the American press.

Fortune magazine's Marc Gunther warned that the entire furor over BPA in 2008 had more to do with activism than science:

“scientific debate isn't driving the baby bottle war; a hard-hitting push by activist groups, politicians and trial lawyers is…

The BPA battles were fought like a political campaign, with catchy soundbites, press releases, personal attacks, and warring Web sites. The anti-BPA general is Dr. Frederick vom Saal. He has testified before state legislatures and appeared on TV to denounce BPA in terms that gloss over the scientific uncertainty. Referring to the fact that BPA is a mild estrogen, he says things like ‘the idea that you're using sex hormones to make plastic is just totally insane.’”

ABC 7 in San Francisco ran a long, Emmy-nominated investigation that featured both Willhite and STATS and focused on the controversy between studies that injected BPA and those that administered it in feed.

But these were in the minority; all the other media organizations put together could not match the coverage given to BPA by one small regional U.S. newspaper, the Milwaukee Journal Sentinel.




