
This post is a natural follow-up to the review of corruption statistics over the last few weeks, but it has actually been on my list to explore since I watched John Oliver take on Scientific Studies (full clip here, and I consider it required viewing, please!).
Here are some of the main takeaways to consider:
1. Scientific studies are often misrepresented by the media. There are many reasons for that, but mostly it comes down to fear and humor: people are more likely to read a story about chocolate causing (or curing) cancer than one saying chocolate tastes good. And who wouldn't click through when they see a story about how "smelling farts might prevent cancer"?
2. Scientific studies are often taken out of context, and the fart one certainly was. (Spoiler alert: there is no scientific study saying that smelling farts will prevent cancer.)
3. Scientific studies are often done in a sloppy manner: no controls, too small a study pool, inappropriate dosages, starting from a bias, and so on. Most of those issues quickly come to light if you look at the actual study, but if you only see the media headline, you are left with the impression that it was a rigorous, meaningful study.
4. Scientific studies are not being independently replicated enough, because, as Oliver pointed out, few people care who did it the second time around and there is "no Nobel Prize for fact checking." The glory is in being the first to find something. (How many of you can name the third man on the moon? I'm being generous in assuming you know the second, who walked with Neil…)
5. Something that is statistically significant may not actually be meaningful in practice, as the quick simulation below illustrates.
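To see point 5 in action before we get to the examples: the sketch below (Python, assuming numpy and scipy are available; all numbers are invented for illustration) manufactures a difference of just 0.02 standard deviations between two groups. That difference is trivial by any practical measure, yet with a large enough sample it sails past the usual p < 0.05 bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups whose true means differ by a trivial 0.02 standard deviations.
n = 100_000
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"mean difference: {group_b.mean() - group_a.mean():.4f}")  # ~0.02
print(f"p-value: {p_value:.2g}")  # far below 0.05, yet the effect is trivial
```

Statistically "significant"? Absolutely. Worth changing your diet, your medication, or public policy over? Almost certainly not.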
Let's go into some examples. Last week, I promised you a consideration of cabbage and belly buttons, right? That is an example of "p-hacking," which is part of number 5 on the list above. This is a serious issue, as this blog at the National Institute of Mental Health outlines, or as discussed in these blogs on the Public Library of Science website, one of which goes so far as to say that most scientific studies are false. So just what is p-hacking?
Basically, it is manipulating statistics. There are many different ways to do it, but the result is the same: non-significant results become statistically significant, or results are deliberately created or developed in a way that gives them false significance. The most common version is simply running many comparisons and reporting only the ones that cross the p < 0.05 threshold; by chance alone, some always will.
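To make that concrete, here is a minimal sketch (again Python with numpy and scipy; the whole setup is invented for illustration) of the simplest form of p-hacking: test a pile of hypotheses on pure noise and report only the winners.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_hypotheses = 200, 100

significant = []
for i in range(n_hypotheses):
    # Both groups come from the SAME distribution: there is no real effect.
    a = rng.normal(size=n_subjects)
    b = rng.normal(size=n_subjects)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant.append(i)

# At a 0.05 threshold, roughly 5% of no-effect tests will look
# "significant" purely by chance.
print(f"{len(significant)} of {n_hypotheses} null comparisons passed p < 0.05")
```

Publish only the handful of "winners," stay quiet about the other ninety-odd tests, and you have a headline.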
The FiveThirtyEight site has several excellent posts on p-hacking. In this one, they posted a chart showing the results of a survey they ran that found, among other things, statistically significant relationships between eating egg rolls and owning a dog, and between using salt and feeling good about your internet provider. Here's the full chart:
That particular post was focused on the challenges of doing meaningful dietary studies, and if you'd like to be both grossed out and ready to give up on taking food science seriously ever again, do go to the end of the post and watch their short video on the impossibility of comparing, well, apples to apples…
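If you would like to see how easily a survey like theirs produces such "findings," here is a rough sketch in the same spirit (the survey is simulated; none of this is FiveThirtyEight's actual code or data): poll 30 people on 20 unrelated yes/no questions, then test every pair of questions for an association.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_people, n_questions = 30, 20

# Random, unrelated yes/no answers: any "relationship" found is pure noise.
answers = rng.integers(0, 2, size=(n_people, n_questions))

spurious = 0
pairs = list(itertools.combinations(range(n_questions), 2))  # 190 pairs
for i, j in pairs:
    # Build the 2x2 contingency table for questions i and j.
    table = np.zeros((2, 2))
    for a, b in zip(answers[:, i], answers[:, j]):
        table[a, b] += 1
    _, p, _, _ = stats.chi2_contingency(table)
    if p < 0.05:
        spurious += 1

print(f"{spurious} of {len(pairs)} question pairs look 'related' at p < 0.05")
```

The standard guard against this is a multiple-comparisons correction, such as Bonferroni, which divides the 0.05 threshold by the number of tests; applied here, it would wipe out essentially all of those "relationships."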
I've done many previous posts arguing that we can't put everything into sound bites. We have to dig into the details. Here's another example from the site:
…a 2013 study found that people who ate three servings of nuts per week had a nearly 40 percent reduction in mortality risk. If nibbling nuts really cut the risk of dying by 40 percent, it would be revolutionary, but the figure is almost certainly an overstatement, Ioannidis told me. It's also meaningless without context. Can a 90-year-old get the same benefits as a 60-year-old? How many days or years must you spend eating nuts for the benefits to kick in, and how long does the effect last? These are the questions that people really want answers to. But as our experiment demonstrated, it's easy to use nutrition surveys to link foods to outcomes, yet it's difficult to know what these connections mean.
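One reason the 40 percent figure sounds more impressive than it is: it is a relative reduction, and without the baseline (absolute) risk it tells you very little. A quick back-of-the-envelope calculation, using a baseline risk I am inventing purely for illustration (it is not from the study):

```python
# Hypothetical baseline risk, invented for illustration, showing why a
# headline "40 percent reduction" means little without context.
baseline_risk = 0.02        # assume 2% of the comparison group died during follow-up
relative_reduction = 0.40   # the headline figure

absolute_reduction = baseline_risk * relative_reduction
new_risk = baseline_risk - absolute_reduction
print(f"risk: {baseline_risk:.1%} -> {new_risk:.1%} "
      f"(absolute drop of {absolute_reduction:.1%})")
```

A 40 percent relative reduction on a 2 percent baseline is less than one percentage point of absolute risk, which is a much less revolutionary headline.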
Isn't that the bottom line? We have to understand how the statistics are related to each other and to our own lives… not everything is meaningful. It's like the corruption stats from the earlier posts: we need to look at relationships and context to come up with meaningful applications.
The same site posted a very fine article called "Science Isn't Broken: It's just a hell of a lot harder than we give it credit for," which concludes most appropriately:
The scientific method is the most rigorous path to knowledge, but it's also messy and tough. Science deserves respect exactly because it is difficult, not because it gets everything correct on the first try. The uncertainty inherent in science doesn't mean that we can't use it to make important policies or decisions. It just means that we should remain cautious and adopt a mindset that's open to changing course if new data arises. We should make the best decisions we can with the current evidence and take care not to lose sight of its strength and degree of certainty. It's no accident that every good paper includes the phrase "more study is needed": there is always more to learn.