Fool Millions Into Eating Chocolate With This One Weird Trick!
Yesterday, John Bohannon of io9 posted an article, “I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here’s How.”, that went absolutely viral. Bohannon ran an exposé on the pervasiveness of bad nutritional science and bad nutritional science reporting by creating his own bogus study. It’s a fascinating read showing how the system breaks down once you get past a few initial barriers. But I wanted to talk specifically about p-hacking: what John & Co. did to get p under 0.05 and out the door.
Use a tiny sample size
The bogus study used three groups - a control group, a group on a low-carb diet, and a low-carb group that also ate bitter chocolate. But each group started with only 16 people, small enough that one or two outlier subjects could conjure a “significant” trend out of thin air. It’s why John says that “almost no one takes studies with fewer than 30 subjects seriously anymore.”
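To make the outlier point concrete, here’s a minimal sketch (not from the study itself - the sample sizes, the noise model, and the hypothetical 10 kg outlier are all made up for illustration). With 16 subjects, one extreme individual drags the group mean far more than it would in a larger sample:

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

# Pure noise: no real effect in either sample.
# Weight changes drawn from mean 0 kg, standard deviation 2 kg.
small = [random.gauss(0, 2) for _ in range(16)]
large = [random.gauss(0, 2) for _ in range(300)]

# Add one dramatic outlier (a subject who lost 10 kg for
# unrelated reasons) and see how far the group mean moves.
shift_small = abs(mean(small + [-10.0]) - mean(small))
shift_large = abs(mean(large + [-10.0]) - mean(large))

print(f"mean shift with n=16:  {shift_small:.2f} kg")
print(f"mean shift with n=300: {shift_large:.2f} kg")
```

Appending one outlier shifts the mean by roughly (outlier − old mean) / (n + 1), so with n=16 a single 10 kg fluke moves the group mean by about half a kilogram, while with n=300 it barely registers.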
Keep rolling the dice
The study also measured a lot of information from these subjects - weight, blood pressure, circulation, sleep quality, and other measurements, for a total of 18 possible metrics. The significance threshold they used was p < 0.05, meaning that the study would declare a trend “significant” if there was under a 5% chance of it being a false positive. Sounds good, right?

Except that if you start studying multiple variables, each of them has its own 5% chance of being a false positive. With 18 metrics, there’s roughly a 60% chance (1 − 0.95¹⁸, assuming independent metrics) that a study with this threshold will return at least one false positive. This is called data dredging, and it’s an excellent way to create connections that only fit your study group. (I actually talked about this two posts ago!) This alone can compromise an honest study. But if you’re trying to make a bogus nutritional study that says something, not caring what it is…
The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.
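You can check the lottery math yourself. This sketch (assuming the 18 metrics are independent, which real physiological measurements aren’t quite) computes the family-wise false-positive rate analytically and confirms it with a quick Monte Carlo run:

```python
import random

random.seed(1)

ALPHA = 0.05   # significance threshold
METRICS = 18   # number of measured outcomes

# Analytic chance of at least one false positive across
# k independent tests: 1 - (1 - alpha)^k.
analytic = 1 - (1 - ALPHA) ** METRICS
print(f"analytic:  {analytic:.0%}")

# Monte Carlo check: treat each metric as an independent
# 5% chance of a spurious "significant" result.
trials = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(METRICS))
    for _ in range(trials)
)
print(f"simulated: {hits / trials:.0%}")
```

Both come out around 60%: flip eighteen 5% coins and more often than not, at least one lands “significant.”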
Hope nobody notices
A reputable journal would have rejected this shoddily constructed study out of hand. But unfortunately (or fortunately, if you’re John Bohannon), there are a lot of disreputable journals out there. Some basic fact checking at the countless websites, newspapers, and magazines that published this study would have shown that the scientific journal was a sham, that the institute was nothing more than a website, or that “Dr. Johannes Bohannon” didn’t even have a relevant PhD.

How was this so easy? Simple. John & Co. made a very innocuous-looking press release, complete with a few stock photos and a concise yet sufficiently scientific-sounding summary of key points. By choosing bitter chocolate as their “secret ingredient”, they picked something that sounded plausible. In the words of John Bohannon’s collaborator Gunter Frank, “Bitter chocolate tastes bad, therefore it must be good for you. It’s like a religion.”
By corrupting a study in just the right ways, Bohannon was able to manufacture a statistically significant result. And by sidestepping a key part of the review process, his team’s sham work found traction in a popular yet poorly scrutinized area of scientific reporting: diet science.