5. Is the risk relative or absolute?
Even before asking whether the results are “statistically significant,” note that a risk factor of less than two is more likely to be due to chance than a risk factor of three or four.
Even when the results are statistically significant, risk increases of less than 100 percent (a factor of two) are more likely to reflect an unanticipated bias that falsely suggests a real risk. This is especially the case in observational studies, where many factors can inadvertently skew the results. If an observational study links something to a 30 percent increased risk of cancer, the finding may not be reliable even if the association is statistically significant. While these small increased risks may sound alarming, most scientists do not consider them strong enough to prompt a major change in behavior without additional evidence.
Another factor to take into account is the way in which the numbers are presented. For example, a study may find that exposure to a certain pesticide increases your risk of a particular cancer by 200 percent. But if the initial risk for that cancer is one in 10,000, a 200 percent increase means that two more people per 10,000 will be affected, for a total of three in 10,000.
The percentage of increased or decreased risk is called relative risk, because it is relative to some other, established figure. It can also be expressed as a factor; for example, the exposure to the pesticide above makes you three times as likely to get this cancer. The actual risk level is called absolute risk; e.g., the absolute risk of that cancer over a lifetime is one in 10,000. It’s important to consider this context (i.e., the actual risk level) when deciding whether the risk is big enough, and well enough supported, to change your behavior.
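The arithmetic above can be sketched in a few lines of Python (a toy illustration using the pesticide example’s numbers, not a statistical tool):

```python
def absolute_risk_after_increase(baseline_risk, percent_increase):
    """Convert a percent increase in relative risk into an absolute risk."""
    return baseline_risk * (1 + percent_increase / 100)

baseline = 1 / 10_000                                    # 1 in 10,000
increased = absolute_risk_after_increase(baseline, 200)  # a 200% increase

print(increased)             # roughly 3 in 10,000
print(increased - baseline)  # roughly 2 extra cases per 10,000
```

Note that the “factor” form is just 1 plus the percent increase divided by 100: a 200 percent increase is a factor of three.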
Two (better yet, three or four) is the magic number
Statistical significance is a measure of how likely a result is to be due to chance (random coincidence). Large increases in risk, like a study showing that smokers are ten times as likely to get lung cancer as nonsmokers, are less likely to have happened by chance. At the conventional 5 percent significance level, a statistically significant result means that, if there were no real effect and no other bias in the study, a result this extreme would arise by chance less than 5 percent of the time.
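To see how chance alone can fake a risk increase, here is a minimal simulation (hypothetical group sizes and disease rate, not taken from any study mentioned here) of a world where exposure has no effect at all:

```python
import random

random.seed(0)

# A world where exposure has NO effect: everyone has the same 1-in-100
# chance of disease. (Hypothetical numbers for illustration only.)
def null_relative_risk(n_per_group=500, base_rate=0.01):
    exposed = sum(random.random() < base_rate for _ in range(n_per_group))
    unexposed = sum(random.random() < base_rate for _ in range(n_per_group))
    return exposed / max(unexposed, 1)  # avoid dividing by zero

trials = 2000
results = [null_relative_risk() for _ in range(trials)]
fluke_2x = sum(rr >= 2 for rr in results)
fluke_10x = sum(rr >= 10 for rr in results)
print(f"chance alone gave RR >= 2 in {fluke_2x / trials:.1%} of trials")
print(f"chance alone gave RR >= 10 in {fluke_10x / trials:.1%} of trials")
```

With only a handful of cases per group, chance alone can fake a doubling of risk surprisingly often, but almost never a tenfold increase; this is one reason a relative risk of ten is so much more convincing than one below two.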
However, there are many studies showing a statistically significant result with a small increase in risk, such as a 10-50 percent risk increase (or a relative risk of less than 2). Should we believe them, and what does the research community say?
The answer is: it depends on who you ask, what your other corroborating evidence is, and what your goal is. The New England Journal of Medicine, one of the most prestigious journals in the world, does not typically publish studies that show less than a three- or fourfold increase in risk.
On the other hand, the Journal of the American Medical Association, another very prestigious journal, regularly publishes studies demonstrating small risk increases. For a journal, the question is whether the analysis of the data is interesting enough to the research community to merit publication.
A small increase in risk is interesting if a reasonable mechanism provides a sensible explanation. For example, studying four hours or more for an exam, compared with three hours or less, might correlate with getting a higher grade; even if the increase is small (say, only ten percent more students pass the course), it is still believable. However, an association without any clear mechanism, such as the debated link between alcohol use and breast cancer, is much more dubious.
The type of bias that might have crept into a study is another important consideration when weighing a small increase in risk. For example, a study on breast cancer has many, many factors that could skew the results, from recall bias (when those who have cancer are more, or perhaps less, likely to report what they ate) to an undetected confounding factor. For this reason, many researchers do not find a study convincing unless the risk factor increases the odds by at least two times, and some would say three or four times.
In a controlled study, however, many of these issues go away. For example, if we want to see whether Viagra causes heart attacks, we take a pool of volunteers and randomly assign them to two groups, A and B. Then we give Group A Viagra, and Group B a placebo. A small, statistically significant increase in the heart attack rate is convincing evidence that the association is “really there.” We don’t have to worry, as we would in a retrospective study, whether those who got heart attacks were more likely to remember that they had taken Viagra than those who didn’t.
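The point about randomization can be illustrated with a short sketch (hypothetical data; the ages are made up for this example). Because volunteers are assigned to groups at random, a potential confounder such as age ends up roughly balanced between them:

```python
import random

random.seed(1)

# 1,000 hypothetical volunteers, each with an age (a possible confounder).
ages = [random.randint(40, 80) for _ in range(1_000)]

# Random assignment: shuffle, then split into Group A (drug) and
# Group B (placebo). Neither group ends up systematically older.
random.shuffle(ages)
group_a, group_b = ages[:500], ages[500:]

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
print(f"mean age: Group A {mean_a:.1f}, Group B {mean_b:.1f}")
```

The same balancing happens for confounders we never thought to measure, which is something a retrospective study cannot promise.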
Small observed increases are also noteworthy when they corroborate several other studies. If many different studies (with different techniques and protocols) show a link between two occurrences, then it is more reasonable to believe the link. However, you still have to check that those other studies don’t share the same biases, and that studies that failed to find the link weren’t simply left unpublished.
Overall, big increases are those we should be most willing to believe, especially if we are really searching for some causality behind it all (and plan, therefore, to change our behavior). “Smoking increases the relative risk for lung cancer by a factor of 10,” says George Gray, formerly the acting director of the Harvard Center for Risk Analysis. “It really jumps out at you. If the relative risk is less than two, maybe less than 1.5, I’d need a lot [more independent studies showing the same result] to get worried about it.”
For more, see our FAQ entry for “What is the difference between absolute and relative risk?”