More on why science and the media don’t know much about nutrition, part 1

by Alan Cohen

After my last post, I’ve decided to expand this theme with a series of posts, as many as it takes to deal with this in some depth. This post will largely recapitulate my last comment to Ju-hong on why nutritional studies are harder to get right than other studies. In the process, I will introduce the statistical/epidemiological concepts of interaction and confounding (not the same!). Don’t worry, I promise to make it intuitive!

Imagine you are a researcher interested in seeing if eating grapefruit increases the risk of breast cancer. Why are you interested in this silly, obscure question? Well, it turns out that grapefruit contains chemical compounds that have been shown to increase estrogen levels, and high estrogen levels are thought to increase breast cancer risk. So it’s actually a reasonable question… So what do you do? Well, like any good researcher, you go out and recruit 50,000 post-menopausal women to fill out your surveys on their dietary intake, and then you follow them for a few years to see who gets breast cancer. Kind of morbid? Welcome to epidemiological research…

There are several problems you are likely to encounter in producing valid results:

Small effect sizes

Smoking really really really increases your risk of lung cancer. As in, if you smoke, you might be 100 times more likely to get some kinds of lung cancer. This makes the association easy to find. You don’t need a million study subjects, and you don’t need to worry about getting all the details of your study right – if you do any study, you’ll find an effect. If you get the details wrong, you might estimate the risk at 10-fold, or 1000-fold, but you won’t miss the general direction of the effect.

Can I see a show of hands for who thinks eating grapefruit will give you 100 times the risk of getting breast cancer? I didn’t think so. Rather than a 100-fold or even 10-fold effect, we expect maybe a 10% increase in risk (1.1-fold), or 50% (1.5-fold) if the effect of a food item is really strong. Certainly less than 2-fold. This is obvious – most stuff that we eat is not downright toxic, so the effects are subtle. Subtle effects are hard to measure: it takes a huge study with lots of participants, and if you have even a small amount of error or bias, the error will overwhelm the actual effect (as explored in the last post). So it’s hard to detect the effects of diet on health, and even harder to be sure that you really have.
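To get a feel for the numbers, here’s a back-of-the-envelope sketch using the standard two-proportion sample-size formula. The 1% baseline risk, 5% significance level, and 80% power are illustrative assumptions of mine, not numbers from any real study:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect a difference
    between two proportions (normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 5% significance
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

baseline = 0.01  # assumed 1% risk among non-grapefruit-eaters

print(n_per_group(baseline, baseline * 1.1))  # 1.1-fold risk: well over 100,000 per group
print(n_per_group(baseline, baseline * 2.0))  # 2-fold risk: a few thousand per group
```

Shrinking the effect from 2-fold to 1.1-fold inflates the required sample size roughly 70-fold – and that’s before any measurement error or bias eats into our power.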

Interactions

Let’s imagine for a moment that our suspicion is right, and high estrogen causes breast cancer. But hormones are complicated – maybe very low levels of estrogen also cause breast cancer. In this case, the effect of the estrogen we take in through grapefruit will depend on what else we eat. Maybe someone who eats lots of soy (full of phyto-estrogens, yay!) and lots of grapefruit will have an augmented breast cancer risk, as will someone who eats lots of grapefruit and lots of non-organic beef (full of added estrogen, not yay 😦 – don’t you love how we market some estrogens and market against others?). But if you eat lots of fish (not full of estrogen) and grapefruit, maybe you’ll have lower risk than if you eat fish and no grapefruit.

OK, that’s a lot of hypotheticals. But those hypotheticals are plausible hypotheticals – we can’t discount them. We also can’t discount that, once in the body, estrogen levels are regulated. They’re regulated by lots of things, including the immune system.  And lots of things we eat affect our immune system. Many antioxidants, in addition to being antioxidants, control aspects of immune function. Omega-3 fatty acids too. So pretty soon we realize that if we want to predict how diet affects someone’s estrogen levels, we had better measure pretty much everything they eat, and know how these things interact at different levels in the body. And we’re a long way from knowing that.

In this scenario, eating grapefruit does have an effect on breast cancer risk, but how strong that effect is, and whether it’s positive or negative, depends on all the other things you’re eating. These are interactions. The effect is real, but it depends. In order to figure out how it depends, you have to add interactions to the statistical model. If you add in one other component of diet (say soy), you need to add two components to the model: soy itself, and the soy-grapefruit interaction. If you add in two other components (say soy and meat), you need to add in six additional components: 1) soy, 2) meat, 3) the soy-grapefruit interaction, 4) the meat-grapefruit interaction, 5) the soy-meat interaction, and 6) the soy-meat-grapefruit interaction. With three other diet items, it’s up to 15 model components, and with 10 other diet items, it’s over 1000 model components. And with more model components, we need a larger sample size. Even if we included every person in the world in our study, we’d be far from having enough people to fit a full model with 100 diet items! (And 100 items is far from enough to fully characterize what people eat.)
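The bookkeeping above can be sketched in a couple of lines: each food gets a main-effect term, and every possible combination of foods gets an interaction term, so the count is one term per non-empty subset of the foods (the function name is mine, just for illustration):

```python
def n_model_terms(n_foods):
    """Number of terms in a full model with main effects and all
    interactions: one per non-empty subset of the n_foods variables."""
    return 2 ** n_foods - 1

print(n_model_terms(2))    # grapefruit + soy: 3 terms
print(n_model_terms(3))    # grapefruit + soy + meat: 7 terms
print(n_model_terms(4))    # three other items: 15 terms
print(n_model_terms(11))   # ten other items: 2047 terms
print(n_model_terms(101))  # 100 other items: ~2.5e30, vastly more than the world's population
```

The doubling with every added food is what makes the full-model approach hopeless.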

Of course, if we know which interactions are important, we can include those and leave out the rest. But the problem is we don’t: maybe cinnamon consumption completely mitigates the effect of grapefruit, but only in people who also eat lots of zucchini and not too much shrimp. We just don’t know where to begin. So in the end, we may detect an effect of grapefruit on breast cancer, but it’s an average effect for the people in our study. The study gives us next to no information on what to recommend, since grapefruit may harm some people and protect others, and the particular average we found may depend a lot on what the people in our study happened to be eating.
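A toy calculation shows how an average effect can mislead. Suppose (hypothetically – all these numbers are invented) grapefruit cuts breast cancer risk 30% in fish-eaters and raises it 60% in beef-eaters, with a 1% baseline risk in both groups:

```python
baseline = 0.01  # assumed 1% risk without grapefruit, either group
risk_with_gf = {
    "fish": baseline * 0.7,  # grapefruit protective for fish-eaters
    "beef": baseline * 1.6,  # grapefruit harmful for beef-eaters
}

# A study population that happens to be half fish-eaters, half beef-eaters:
avg_with = 0.5 * risk_with_gf["fish"] + 0.5 * risk_with_gf["beef"]
avg_without = baseline
print(avg_with / avg_without)  # 1.15 -- a modest apparent "average" harm
```

The study reports a 1.15-fold risk that applies to no one in it – and a study done in a town full of fish-eaters would report the opposite direction.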

Confounding

Interactions are when real effects change depending on conditions. Confounding is when something that appears to be a real effect isn’t – it’s just an artifact of both things being correlated with a third. For example, imagine that people who eat grapefruit also tend to eat brown rice, and that brown rice causes cancer. Unless we include brown rice in our model, it will appear as if grapefruit causes cancer, even though it doesn’t. So we need to control for brown rice in order to see what the real effect of grapefruit on cancer might be. Brown rice is a confounder. Unfortunately, as with interactions, almost everything we eat might be a confounder. Most items in the diet are likely to co-occur with many other items in the diet. People who eat Doritos also tend to drink Coke. People who eat lobster eat butter and lemon. People who shop at Whole Foods are likely to eat hormone-free beef, organic asparagus imported from Chile, and microwaveable soy patties.
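Here’s a small simulation of the brown rice scenario (every probability in it is invented for illustration): grapefruit consumption tracks brown rice consumption, only brown rice drives cancer risk, and yet the crude comparison blames grapefruit. Stratifying by brown rice recovers the truth:

```python
import random

random.seed(1)
cells = {}  # (eats_rice, eats_grapefruit) -> [n_people, n_cancers]
for _ in range(200_000):
    rice = random.random() < 0.5
    # grapefruit tracks brown rice but causes nothing itself
    gf = random.random() < (0.8 if rice else 0.2)
    cancer = random.random() < (0.02 if rice else 0.01)  # only rice matters
    cell = cells.setdefault((rice, gf), [0, 0])
    cell[0] += 1
    cell[1] += cancer

def risk(keys):
    n = sum(cells[k][0] for k in keys)
    c = sum(cells[k][1] for k in keys)
    return c / n

# Crude (unadjusted) risk ratio: grapefruit looks guilty (~1.5-fold)
crude_rr = risk([(True, True), (False, True)]) / risk([(True, False), (False, False)])

# Stratified by brown rice: within each stratum the "effect" vanishes (~1.0)
rr_rice = risk([(True, True)]) / risk([(True, False)])
rr_norice = risk([(False, True)]) / risk([(False, False)])
print(crude_rr, rr_rice, rr_norice)
```

Of course, this only works because we knew to measure brown rice and adjust for it – the whole problem is that in real diets there are hundreds of brown rices.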

So if we want to pinpoint the effect of grapefruit on breast cancer, we need to identify every food that is more or less likely to be eaten by people who eat lots of grapefruit. We need to measure consumption of all of them, and even small errors in how accurately we measure them could be a problem. And that’s before we get to non-dietary confounding factors: are people who eat grapefruit more or less likely to smoke? Are they richer or poorer? Are they more likely to live in Florida? To have relatives who work for Tropicana? To be model train aficionados? (Who knows? Maybe model trains cause cancer and stimulate a hunger for grapefruit…)

Of course some of these scenarios are absurd. No study is perfect, and usually we hope that many different studies will show consistent results. But the problem for studies like this one on grapefruit (and most studies in nutritional epidemiology) is that all these difficulties come together at the same time with particular severity. Sure, lots of things can interact with smoking (some genes, for example), and smokers also drink, so there’s a potential for confounding. But the effect size is huge, and the number of plausible interactions and confounding factors is much smaller, allowing us to effectively control for them and to build up a base of evidence across studies.  With nutritional studies, we have a perfect storm of lots of plausible interactions, lots of plausible confounding, difficulty in accurately measuring or even identifying these factors, and then very small expected effects which will be swamped by even minor biases in our studies. So we spend lots of money and conclude lots of nothing…

Or, rather, we spend lots of money, draw lots of unsubstantiated conclusions, spread these conclusions far and wide through the media, and encourage unhealthy fad eating.