If you follow health and wellness news on TV or online, this has likely happened to you: you come across a story saying you should eat more of some food, say chocolate, because a new study shows it may help lower your risk of cancer or heart disease… but you could swear that just a few months ago you read about another study saying you should avoid that food because it causes those very diseases. The new article doesn’t mention that earlier study, so you start to think you’re misremembering. After all, it was a while ago, and with so many studies about so many ingredients, it can be hard to keep track.
Turns out you probably weren’t wrong after all. Some recent meta-studies (studies of studies) show that there’s conflicting research behind most of the foods we eat, and that the media is terrible at helping us make sense of it all.
Study 1: Everything you eat both causes and prevents cancer
In a study published in the American Journal of Clinical Nutrition (AJCN), researchers randomly selected 50 ingredients from recipes in The Boston Cooking-School Cookbook and then looked for studies that evaluated each ingredient’s cancer risk. They found that 80% of the ingredients had been associated with either an increased or a decreased risk of cancer, and as Vox’s Julia Belluz pointed out, many ingredients had studies pointing in both directions.
It’s not just cancer, either: we’ve seen conflicting studies on whether many of those same ingredients are associated with other chronic diseases, like heart disease, obesity, and diabetes.
Conflicting research doesn’t necessarily mean there’s no association (although for many foods that may be the case), but it does mean that the results of a single study aren’t all that helpful on their own. You have to know (1) how the study was done (Were there a large number of participants? What kind of study was it? Was it designed to rule out other variables?) and (2) how the results fit into the larger picture of previous work on the subject. As Belluz writes, “The truth can be found somewhere in the totality of the research.”
Often, though, when news stories cover the latest food study, none of those questions get asked, or if they do, the answers are buried toward the end, where readers who just skim the headlines will never see them.
Study 2: “A fairly typical study for the field of diet research. Which is to say: It was terrible science.”
Take a recent study by Johannes Bohannon, research director for the Institute of Diet and Health, showing that eating chocolate can help you lose weight. The study, which was published in the International Archives of Medicine, found that people on a low-carb diet lost weight 10% faster if they ate a chocolate bar every day.
It would be great news for chocolate lovers, except… there is no Institute of Diet and Health. It’s just a website, and “Johannes Bohannon” is really a journalist named John Bohannon. The study itself was real, though: Bohannon and his colleagues recruited actual human subjects in Germany and ran an actual clinical trial. He writes:
“It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded.”
In an article for the website io9, Bohannon explains how he designed the study so that it was virtually guaranteed to get an impressive-sounding result that was still totally meaningless:
Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.
Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.
Statistical significance is determined by a study’s p value. The traditional cutoff for a result being “statistically significant” is a p value of 0.05, meaning that if there were no real effect, a result at least that extreme would turn up only 5% (1 in 20) of the time by random chance. But if you’re measuring 18 different things, the odds are pretty good (about 60%, in fact) that at least one of them will come up “significant” purely by chance.
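To make that arithmetic concrete, here’s a minimal Python sketch. The only numbers taken from the study are the 18 measurements and the 0.05 cutoff; everything else is illustrative. It computes the false-positive odds in closed form, then double-checks them with a quick simulation:

```python
import random

ALPHA = 0.05          # conventional significance threshold
N_MEASUREMENTS = 18   # number of outcomes Bohannon's team tracked

# Closed form: if none of the 18 measurements reflects a real effect,
# the chance that at least one still crosses the threshold by luck
# alone is 1 - (1 - alpha)^18.
p_any = 1 - (1 - ALPHA) ** N_MEASUREMENTS
print(f"P(at least one false positive) = {p_any:.1%}")  # ~60.3%

# Sanity check by simulation: under the null hypothesis a p value is
# uniformly distributed, so each measurement is modeled as a random
# draw, and we count how often a trial "wins the lottery" at least once.
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_MEASUREMENTS))
    for _ in range(trials)
)
print(f"Simulated estimate: {hits / trials:.1%}")
```

The closed-form figure assumes the 18 measurements are independent, which real physiological measurements rarely are, but the qualitative point survives: every extra outcome you measure is another lottery ticket.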
The first line of defense against this kind of bad science is the scientific journals. Credible journals are peer-reviewed, meaning they first send submissions to other scientists working in the field to determine whether the methodology is sound enough to publish. The chocolate study was so flawed that the editors of most reputable journals would have rejected it out of hand before it even reached peer review. The problem is that there are plenty of not-so-reputable journals (with reputable-sounding names like International Archives of Medicine) that will publish basically anything submitted to them for a small fee. Bohannon says his chocolate paper was accepted by multiple journals within 24 hours.
That leaves it up to the media to determine whether a study is credible. If a journalist had done the basic work of putting the chocolate study in context (examining its methodology, or running it by any other nutrition scientist), they would have seen just how shaky Bohannon’s “research” was. Instead, the story was picked up by Bild (Europe’s largest daily newspaper), Shape, Prevention, the Daily Mail, the Huffington Post, Cosmopolitan’s German website, and The Times of India, among others.
Study 3: Most nutrition studies may be “fundamentally and fatally” flawed and “essentially meaningless”
If Bohannon’s chocolate study shows how easily bad nutrition research spreads, a paper published in the journal Mayo Clinic Proceedings suggests that even some of the better studies should be re-examined.
The problem, the researchers say, is that most studies of food and obesity in the U.S. are based on “memory-based dietary assessment methods” (interviews, questionnaires, and surveys), and most people are terrible at remembering what they actually ate. For example, in the National Health and Nutrition Examination Survey, which has been conducted since 1971, the amount of calories reportedly consumed by 67.3% of women and 58.7% of men was “not physiologically possible.” (An editorial in the British Medical Journal put it another way: the reported calorie intakes are “incompatible with life.”)
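To see what “not physiologically possible” means in practice, here’s a toy plausibility check. This is not the method from the Mayo Clinic Proceedings paper, which uses more careful energy-expenditure models; the Mifflin-St Jeor equation is a standard estimate of basal metabolic rate (BMR), and the 1.35 × BMR floor is an assumed cutoff for illustration only:

```python
def mifflin_st_jeor_bmr(weight_kg: float, height_cm: float,
                        age_years: int, male: bool) -> float:
    """Estimate basal metabolic rate (kcal/day) via Mifflin-St Jeor."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_years + (5 if male else -161)


def plausible_intake(reported_kcal: float, bmr_kcal: float,
                     floor_ratio: float = 1.35) -> bool:
    """Flag habitual reported intakes below an assumed multiple of BMR.

    floor_ratio = 1.35 is illustrative: even sedentary adults expend
    well above their basal rate, so a long-run intake far below that
    can't be an accurate report of what someone actually eats.
    """
    return reported_kcal >= floor_ratio * bmr_kcal


# Example: a 40-year-old woman, 70 kg, 165 cm, reporting 1,200 kcal/day.
bmr = mifflin_st_jeor_bmr(70, 165, 40, male=False)  # ~1,370 kcal/day
print(plausible_intake(1200, bmr))                  # False -> implausible
```

A survey answer can fail a check like this and still get averaged into a study’s results, which is exactly the researchers’ complaint.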
The researchers write:
“The use of [memory-based surveys] requires faith in the belief that human perception, memory, and recall are accurate and reliable instruments for the generation of scientific data. Nevertheless, more than 80 years of research demonstrates that this belief is patently false.”
What does work: study the studies
The obvious question, then: if the media will report on pretty much any new nutrition study, regardless of how good the science behind it is, how are we supposed to tell which studies are the good ones?
One solution to the problems in all three studies we mentioned (conflicting results, poorly designed studies, and over-reliance on self-reporting) is what’s known as meta-analysis. A meta-analysis is a study of all the previous studies on a subject that combines their results statistically, giving more weight to larger, better-designed studies.
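At its simplest, that weighting is done by inverse variance: each study counts in proportion to one over its squared standard error, so precise (usually larger) studies dominate. Here’s a minimal fixed-effect pooling sketch with made-up numbers; real meta-analyses also test for heterogeneity between studies and often use random-effects models instead:

```python
import math

# Hypothetical study results as (effect estimate, standard error) pairs.
# A real meta-analysis would extract these from the published papers.
studies = [
    (0.40, 0.30),  # small study: big effect, big uncertainty
    (0.10, 0.10),  # mid-sized study
    (0.02, 0.05),  # large study: tiny effect, tight estimate
]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")  # ~0.044 +/- 0.087
```

Notice how the small study’s big effect gets pulled toward the large study’s near-zero estimate. That is the same “shrink or disappear” pattern the AJCN researchers found.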
That first AJCN study we mentioned, the one showing the conflicting results of various cancer studies, also looked at the relevant meta-analyses for each ingredient (so yes, the AJCN study is a meta-analysis of meta-analyses, basically the Inception of nutrition research). The researchers found that in the meta-analyses for each ingredient, the associations with cancer tended to shrink or disappear. In other words, meta-analyses seem to help filter out some of the noise from smaller, conflicting studies.
That’s not to say these studies of studies are a silver bullet; like any kind of science, some are stronger than others. But if you see the words “systematic review” or “meta-analysis” in an article on the latest food study, it’s a good sign you can take it a little more seriously.