Why we shouldn’t trust news stories on scientific studies about what foods are healthy
by Alan Cohen
The blog newsofthetimes has a nice recent post on how new scientific studies are constantly changing conventional wisdom about how to eat healthily. As a health researcher who does some work with nutrition, I have often remarked on exactly the problem she points out, and I have asked myself who is at fault for our rapidly changing guidelines to healthy eating. I think there are largely two guilty parties. First is the media. They know that studies like this make good headlines, even when scientists view the results as tentative. They’ve got us jumping this way and that every time there is a new study, which is often.
The second guilty party is researchers themselves (ourselves). We all know that correlation is not causation, and most of the analyses we do go to great lengths to take this into account. But often these lengths are not great enough, or fail to appreciate how complex the world is.
Let’s say that I try to study the link between beta-carotene (a nutrient found in many fruits and veggies) and cancer risk. I conduct blood tests to measure beta-carotene and follow patients for years to see who gets cancer. Well, we all know that in modern western countries rich people eat more fruits and vegetables, and that being rich has lots of other intangible benefits which are likely to lower cancer risk. So we had better control for socio-economic status (SES), to make sure we don’t find a link between beta-carotene and lower cancer risk that is just due to the additional benefits of being rich.
Unfortunately, our measures of SES are imperfect (income and education don’t fully describe the cultural differences of class), so we are likely to only partially succeed in controlling for SES in the analysis. This problem is widely under-appreciated by researchers. At the same time, because rich people eat more fruits and veggies, when we control for SES we may inadvertently get rid of the statistical pattern we want to see – the link between consumption of fruits and veggies containing beta-carotene and cancer risk.
So after our (long, expensive) study, we publish results in some fancy journal showing that there is (or isn’t) an effect of beta-carotene on cancer risk. The results are highly significant, and the media eats up the story. The readers eat up their fruits and veggies, boosting sales of blueberry juice and acai extract. But in the end, we don’t know anything: if we find an effect, it could be just due to bias, because we haven’t sufficiently controlled for SES. If we don’t find an effect, it could be because we controlled away the real effect when we included SES in the model. So a few years later, someone else will come along, find the opposite, and conventional wisdom will change again.
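To make this concrete, here is a minimal simulation sketch of the study above (all numbers invented): SES drives both beta-carotene levels and cancer risk, beta-carotene itself has no effect at all, and we only observe a noisy proxy of SES. Adjusting for the proxy shrinks the spurious association but does not eliminate it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: SES drives both beta-carotene intake and cancer risk;
# beta-carotene itself has NO true effect on risk.
ses = rng.normal(size=n)                 # true (latent) socio-economic status
carotene = ses + rng.normal(size=n)      # richer people eat more fruits/veggies
risk = -ses + rng.normal(size=n)         # richer people have lower risk (other reasons)

# We only observe a noisy proxy of SES (income/education miss "class" culture)
ses_observed = ses + rng.normal(size=n)

def adjusted_slope(x, y, covar):
    """Slope of y on x after regressing covar out of both (partial regression)."""
    x_res = x - np.polyfit(covar, x, 1)[0] * covar
    y_res = y - np.polyfit(covar, y, 1)[0] * covar
    return np.polyfit(x_res, y_res, 1)[0]

naive = np.polyfit(carotene, risk, 1)[0]                # no adjustment
partial = adjusted_slope(carotene, risk, ses_observed)  # adjust for noisy proxy
perfect = adjusted_slope(carotene, risk, ses)           # adjust for true SES

print(f"no adjustment:     {naive:+.3f}")    # strongly negative (spurious)
print(f"adjust noisy SES:  {partial:+.3f}")  # still negative: residual confounding
print(f"adjust true SES:   {perfect:+.3f}")  # ~0: the true (null) effect
```

With no true effect at all, the naive analysis finds a strong “protective” association, and even the adjusted analysis still does, because the noisy proxy only partially captures SES. No real study gets to adjust for the true latent variable.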
We expect the effects of individual items in the diet to be relatively modest, and this makes the problem worse: modest effects are easily swamped by noise and bias, so no single food is likely to show a clear, robust signal.
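A rough back-of-the-envelope power calculation (the standard two-group formula, with hypothetical effect sizes) shows how fast the required sample size grows as the expected effect shrinks:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate subjects per group to detect standardized effect size d
    in a two-group comparison, two-sided test at significance level alpha."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # critical value for the two-sided test
    z_b = z(power)           # quantile corresponding to the desired power
    return 2 * ((z_a + z_b) / d) ** 2

# Hypothetical effect sizes: "large-ish", "small", and "diet-study small"
for d in (0.5, 0.2, 0.05):
    print(f"effect size d={d}: ~{n_per_group(d):,.0f} subjects per group")
```

Halving the effect size quadruples the required sample, so the tiny effects plausible for single foods demand studies far larger than most that get run.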
My advice: read Michael Pollan, eat like our (ancient) ancestors, and ignore all health advice you hear in the news. The interactions of different foods are far too complex to be properly studied with the approaches normally used, the risk of unforeseen biases too great, and most such studies are a waste of money.
A third frequently guilty party, at least implicated in your post and by far the most numerous, is common consumers of information, who could stand to be much more critically minded than they often are. Although some folks may be better prepared than others for a career of doubt, everyone is at least capable of serious reflection about things seen, read and heard. Serious reflection is one of the great hallmarks of our species after all. So what turns it off in some cases, and makes those people willing to listen only to what they already believe? How might constructive doubt be nurtured on a massive, national scale? How might listening across perspectives be entered into a school curriculum, made a public agenda?
Yes, consumers of information are part of the equation too. I hesitated to mark them as a third party because I’m not sure there’s much that can be done, or if blame is even fair. In my case, for example, it pretty much took being a professional epidemiologist to understand how to filter this kind of information, and I can’t expect everyone to have advanced degrees in everything. That said, we could educate people to think more critically in general, and we could expect more of them…
Having used the word “guilt” myself, I agree that blame isn’t the point. Responsibility is a better notion, and the idea is to educate people into a more radical sense of their own agency, and out of the passiveness built into educational models that conceive of students as receptacles for knowledge, as bodies in which to reproduce a state of the art or a status quo. Granting that teachers know more than students in the subjects they teach, especially in the early encounters, a renovated system would from the earliest years cultivate a student’s sense of her own creative, collaborative role in engaging new ideas and processing information in a way current models do not. I think it was Yeats who said (something like) “education is not the filling of a pail, but the lighting of a fire”, and while American (and Canadian) education is largely and perhaps increasingly the former type, one can easily imagine contrasts in the kinds of agency each approach produces.
The first we see all around us, in easy susceptibility to campaign guile, in tendency to follow what we already believe and resulting gridlock across viewpoints, in loss of ability and desire to listen, in the way standardized education gets conceived, measured, promoted and funded, in perennial cutbacks to disciplines and institutions (e.g., the arts) that conform poorly to these mass measures of progress and achievement. The second sort of agency, based in a student’s conviction about his own active role in generating critical understanding, would tend to question rather than automatically assimilate new information, views, claims, representations, structures, etc. The point of that system would not so much be to produce advanced degrees, which can encourage tunnel vision as easily as critical thinking. It would be to produce better, and better-thinking citizens, more capable of constructive accommodation and problem solving in a wide range of areas. Such people would resist better the various mass opiates we consume, whether religion, journalism, advertising or campaign politics, by grasping how all of these use, while trying to hide, our own complicity in accepting a message.
As to implementation, feel free to respond. I haven’t a clue.
I agree and disagree! (So many ideas that it would be hard to have an opinion on them all.) Yes, education should be the lighting of a fire, though it rarely is. And I think that the US is probably one of the best places in the world for lighting a fire, even if our record is markedly unimpressive. But compare us to Japan or Korea and you get the picture… They are much better at filling pails (even if the pails are leaky more often than not). So I tend to be quite disheartened because I don’t know of any societal model where a public education system has truly succeeded in lighting fires and encouraging critical thinking in a large segment of the population. The Netherlands perhaps? I don’t know their system well, but I tend to have the impression that their system is very good at filling pails selectively and critically, if not quite lighting fires.
I agree strongly that we emphasize what we can measure (even if we measure it poorly), and this leads to the overuse of “stats” like we see in police forces, The Wire, and academic markers of success (grants and publications). How easy it is to think that if we can’t measure it, it’s not important! Usually the reverse is true, or we emphasize the metric over the measured phenomenon, leading to bias as people learn to subvert the metric.
Having done some of my first teaching this semester, I feel I’ve had great success lighting fires. But it’s a constant battle against the system to do so – the system wants me to fulfill metrics, at the expense of lighting fires. Fire-lighting is very much a personal undertaking, so as much as I am a believer in systems and their optimization, I think a systemic approach in this case may be exactly the wrong one, except to have the system get out of the way as much as possible.
The problem is that there’s a trade-off (there always is), and that the more we step back and allow fire-lighting, the less guarantee there is that the worst-performing teachers and students have at least a minimal level of success. There may be small corners of education where we can avoid this trade-off, but I don’t think we can avoid it overall.
I wonder why nutrition/health studies are harder to conduct than studies in other disciplines that use similar analyses. Or are all of their results dubious too? For example, there are probably tons of studies about the long-term effects of childhood experiences on later outcomes in life that do similar analyses. Both would need to control for a lot of things, and there are many things that you can’t even capture with the “data” (numbers).
I think most of the results are dubious: there are good arguments that most published research findings are false. I have a lot of doubts about many of these childhood studies too. But I think nutritional studies are particularly bad for several reasons:
1) The expected effects are small for each element in the diet. The statistical power needed is thus huge, and most studies aren’t big enough to identify the effects with certainty.
2) The effects are almost certainly subject to complex interactions. For example, the effect of eating lots of grapefruit on cancer risk may be different for people who eat a lot of meat or a little meat, or who eat kale versus spinach, or for people who eat 3-5 oz of meat and one leaf of kale daily compared to everyone else… We have no idea what these interactions might be, and we can’t measure or test even a small fraction of them, so we end up with an average effect. That average effect is not particularly likely to apply to most people, since most people don’t eat like the “average” person measured by the average effect.
3) There is almost certainly complex confounding. Interactions are when the effect differs depending on what else you eat: grapefruit could be either helpful or harmful depending on the rest of your diet, but the effects are real either way. Confounding is when we appear to have an effect that isn’t really there, just because two other variables are associated. For example, people who eat lots of grapefruit may also tend to eat lots of brown rice, and brown rice may cause cancer (hypothetically – I don’t think this is true!). If we don’t control for how much brown rice people eat, it will appear that grapefruit causes cancer, when in fact brown rice is the culprit. Since most items in people’s diets are correlated positively or negatively with most other items, it will be essentially impossible to sort out the true culprits.
Interactions, confounding, and small effect sizes can be problems in lots of studies, but rarely do they all come together in such a problematic way as for nutritional epidemiology.
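The interaction problem (point 2 above) can be illustrated with a toy simulation, with all effect sizes invented: if grapefruit helps people who eat little meat but harms heavy meat eaters, the population-average analysis sees essentially nothing, even though the subgroup effects are real and substantial:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical interaction: grapefruit helps light meat eaters but harms
# heavy meat eaters, with equal and opposite effects (+/-0.4, invented).
grapefruit = rng.normal(size=n)
heavy_meat = rng.random(n) < 0.5
effect = np.where(heavy_meat, +0.4, -0.4)
risk = effect * grapefruit + rng.normal(size=n)

avg = np.polyfit(grapefruit, risk, 1)[0]
heavy = np.polyfit(grapefruit[heavy_meat], risk[heavy_meat], 1)[0]
light = np.polyfit(grapefruit[~heavy_meat], risk[~heavy_meat], 1)[0]

print(f"average effect (all subjects): {avg:+.3f}")    # ~0: looks like no effect
print(f"effect in heavy meat eaters:   {heavy:+.3f}")  # ~+0.4
print(f"effect in light meat eaters:   {light:+.3f}")  # ~-0.4
```

Unless the study happens to measure and test meat intake as a modifier, it reports the near-zero average, which applies to nobody in either subgroup.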