Adam Larson sent me the following question about a study of obesity and a press release about it from NPR. The claim, made in both the press release and the underlying article, is that weight discrimination makes the already obese 3 times as likely to remain obese, and the non-obese 2.5 times as likely to become obese. Adam writes:
They interpret odds ratios of 2.5 and 3 as “2.5 times as likely” and “3 times as likely”.
Balderdash, yes? I assume what they’re getting at is that in one group something like 85% remained obese; in the other 75%. This gives an odds ratio of (.85/.15)/(.75/.25)=1.89
So common sense would call it a 10 percentage point decrease or a 12% decrease, right?
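Here is Adam's hypothetical as a few lines of Python (the 85%/75% split is his illustrative example, not a figure from the study):

```python
# Adam's hypothetical: 85% of one group remained obese vs. 75% of the other.
p1, p2 = 0.85, 0.75

odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))
pp_change = (p1 - p2) * 100        # percentage-point difference
rel_change = (p1 - p2) / p1 * 100  # percent change, relative to the 85% group

print(f"odds ratio:      {odds_ratio:.2f}")   # 1.89
print(f"pp difference:   {pp_change:.0f}")    # 10
print(f"relative change: {rel_change:.0f}%")  # 12%
```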
Adam is spot-on. An odds ratio is the odds of an event happening in one group divided by the odds of the same event happening in another. Odds are summaries of probabilities that get used by sports books and nearly no one else, because they are counter-intuitive non-linear transformations of probabilities. If an event has an X% chance of happening, the odds that it happens are (X%)/(100-X%). The basic problem with odds ratios is that long ago someone (we should figure out who and curse their name) realized that for rare outcomes, an OR is approximately a relative risk, or (% chance thing occurs in treatment group)/(% chance thing occurs in control group). That is:
(0.01/0.99)/(0.02/0.98) ≈ 0.5 = 0.01/0.02
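The approximation only behaves when both probabilities are small. A quick sketch that fixes the true relative risk at 2 and lets the baseline risk grow:

```python
# Hold the true relative risk at exactly 2 and watch the odds ratio
# drift away from it as the outcome becomes more common.
for p_control in [0.001, 0.01, 0.05, 0.10, 0.25, 0.40]:
    p_treated = 2 * p_control  # relative risk of 2 by construction
    odds_ratio = (p_treated / (1 - p_treated)) / (p_control / (1 - p_control))
    print(f"baseline risk {p_control:6.1%}: RR = 2.00, OR = {odds_ratio:.2f}")
```

At a 0.1% baseline risk the OR is 2.00; at a 40% baseline it has ballooned to 6.00.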
That approximation has ever since been taught to applied statisticians working in certain fields (public health is one example), who use odds ratios for the scientifically important reason that they are the default output of many regression packages when you run a logistic regression.*
And so people misinterpret them constantly, presenting odds ratios as relative risks even when the underlying risks are not small and the approximation does not hold. This is even before we get into the fact that calling a change from P=0.01 to P=0.02 a “100% increase in risk” is itself fairly absurd and misleading. It’s a one percentage-point increase. There is no intrinsic sense in which “the risk tripled” actually means anything. Did you know that if you go in the ocean you are infinitely more likely to get eaten by a shark than if you stay on land? (You probably did, but it’s a stupid number to think about. What is actually relevant is that the absolute risk went up by some fraction of a percentage point.)
For this paper, under the assumption that their regression adjustment doesn’t change the numbers too much, we can actually back out what the percentages really are. First, the effect on the not-initially-obese:
Mean outcome = (% discriminated)*(mean for discriminated people) + (1 – % discriminated)*(mean for non-discriminated people)
Writing X for the rate among those who reported discrimination and Y for the rate among those who did not, and plugging in the paper’s figures:
0.058 = 0.08X + 0.92Y
Odds ratio = ((mean for discriminated people)/(1 – mean for discriminated people)) / ((mean for non-discriminated people)/(1 – mean for non-discriminated people))
(X/(1-X))/(Y/(1-Y)) = 2.54
Rearranging the odds-ratio equation to write Y in terms of X:
Y = (50X)/(127-77X)
and substituting into the mean-outcome equation:
0.058 = 0.08*X + 0.92*(50X)/(127-77X)
Solving gives:
X = 0.1230
Y = 0.0523
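If you’d rather not grind through the algebra by hand, here is one way to back the numbers out numerically, under the same no-adjustment assumption; risks_from_or is my own name for the helper, not anything from the paper:

```python
# Recover the group-specific probabilities X (reported discrimination) and
# Y (did not) from the overall mean outcome, the share reporting
# discrimination, and the published odds ratio.
def risks_from_or(mean, share_disc, odds_ratio):
    # The OR pins Y down for any trial value of X:
    #   Y = X / (odds_ratio * (1 - X) + X)
    # and the weighted mean rises monotonically in X, so bisection works.
    lo, hi = 0.0, 1.0
    for _ in range(100):
        x = (lo + hi) / 2
        y = x / (odds_ratio * (1 - x) + x)
        if share_disc * x + (1 - share_disc) * y < mean:
            lo = x
        else:
            hi = x
    return x, y

x, y = risks_from_or(mean=0.058, share_disc=0.08, odds_ratio=2.54)
print(f"X = {x:.4f}, Y = {y:.4f}")          # 0.1230, 0.0523
print(f"risk ratio = {x / y:.2f}")          # 2.35
print(f"pp change  = {(x - y) * 100:.1f}")  # 7.1
```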
So the change is 7.1 percentage points. Put less clearly, P(became obese) has gone up by a factor of 2.35 for those who experienced weight discrimination, relative to those who did not. That is different from the OR of 2.54, but here their figure isn’t too far from the true relative risk.
Repeating the process for their other analysis, however, reveals how misleading ORs can be:
0.263 = 0.08X + 0.92Y
(X/(1-X))/(Y/(1-Y)) = 3.20
Solving these equations for X and Y gives us:
X = 0.505
Y = 0.242
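Running the same risks_from_or sketch from above on this analysis:

```python
# The same back-out for the already-obese group: mean 0.263, OR 3.20,
# reusing risks_from_or from the sketch above.
x, y = risks_from_or(mean=0.263, share_disc=0.08, odds_ratio=3.20)
print(f"X = {x:.3f}, Y = {y:.3f}")  # 0.505, 0.242
print(f"risk ratio = {x / y:.2f}")  # 2.09, nowhere near 3.20
```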
Here the risk ratio is 2.09, not 3.20. The percentage-point change of 26.3 remains completely comprehensible, as it always is. Misusing odds ratios here allowed them to overstate the size of their effect by more than 50%! I suspect, but am not sure how to prove, that with regression adjustment these figures could look even more misleading.
As most people who read this already know, even if presented correctly the figures wouldn’t mean anything. There’s no reason to believe the relationship being studied is causal in nature. Indeed, it probably suffers from classic reverse causality: people who are gaining weight (or failing to lose weight) are likely to perceive a greater degree of weight discrimination. But presentation matters too. First, clear presentation can help us make use of studies, even when they are as limited as this one is. As the above derivation illustrates, figuring out what an odds ratio actually means involves 1) the annoying process of scrounging through the paper for all the variables you need and 2) solving a system of two equations for two unknowns, which most people can’t do in their head. This detracts very substantially from a paper’s clarity: in general, when I see odds ratios presented in a paper, I have no idea what they mean. An OR of 2 could mean that the risk went from 1% to 2% or (to use a variation on Adam’s example) from 75% to 86%, or a whole host of other things.
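To put numbers on that ambiguity, here is a small sketch inverting an OR of 2 against a few baseline risks (risk_from_or is again my own hypothetical helper):

```python
# Three very different baseline risks, all consistent with an odds ratio of 2.
def risk_from_or(p_baseline, odds_ratio=2.0):
    # Invert the OR: given the baseline risk, return the other group's risk.
    odds = odds_ratio * p_baseline / (1 - p_baseline)
    return odds / (1 + odds)

for p in [0.01, 0.25, 0.75]:
    print(f"{p:.0%} -> {risk_from_or(p):.0%}  (same OR of 2)")
```

The same odds ratio of 2 maps 1% to 2%, 25% to 40%, and 75% to 86%.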
Second, poor presentation has consequences. Health risks are often reported using relative risks, or, worse yet, using ORs that are presented as relative risks. This is often extremely misleading, since a doubling of risk could mean that the chance went from 0.001% to 0.002%, or from 50% to 100%. Misleading and confusing people about risks undermines the basic goal of presenting health risks in the first place: to help people make better decisions about their health.