Science is amazing, but science reporting can be confusing at times and misleading at worst. The folks at Compound Interest put together this reference graphic that will help you pick out good articles from bad ones, and help you qualify the impact of the study you're reading.


One of the best and worst things about having a scientific background is being able to see when a science story is poorly reported, or when a preliminary study is published as if it were otherwise. One of the worst things about writing about science is worrying you'll fall into the same trap. It's a constant struggle, because there are interesting takeaways even from preliminary studies and small sample sizes, but it's important to qualify them as such so you don't misrepresent the research. With this guide, you'll be able to tell when a study's results are interesting food for thought that's still developing, versus a relatively solid position that has consensus behind it.

You'll see some common afflictions here, like studies and articles that discuss correlation but don't point to causation (though it's important to read studies before trotting out that aphorism—some studies actually do account for causation, but that gets overlooked when they're reported), stories on preliminary studies with small sample sizes or narrow selection criteria (e.g., "A study of 32 Swedish men from the same town revealed..."), and so on. It's not a perfect chart, and many of the commenters at Compound Interest rightfully call them out on certain items, but it's a useful reference for honing your critical thinking skills when reading science news and reporting.


You can see the chart below, download a PDF version here, or hit the link to Compound Interest to read more or even buy a wall print for yourself (or your classroom!).

A Rough Guide to Spotting Bad Science | Compound Interest