Ever see a headline boasting of an outrageous conclusion that some new scientific study found? These headlines pop up regularly, and they are a boon for publications that get lots of eyeballs reading their articles about the shocking new findings. Factory farming is actually good for the environment! Labeling genetically engineered foods will raise your grocery bills! Wow, really? No way!

These too-good-to-be-true – or too-bad-to-be-true – headlines are accurate, in that there was a study and it did come to those conclusions. But how accurate was the study? As it turns out, scientific studies with phony findings are not as uncommon as they should be. And far too often, bad journalism results in the uncritical reporting of these phony findings.

If you ever read about a study finding that all-cupcake diets are the key to longevity and good health, read the study to see whether the cupcakes tested were made from spinach and wheat germ. Here are some favorite tactics used to design a study to get the findings you’re looking for.

1. Start With a Wrong Assumption

If you live in California, you may recall hearing how labeling genetically engineered (GE) foods would increase your grocery bill. When a ballot initiative to label GE foods was first announced, polls showed voters overwhelmingly in favor. But by the 2012 election, it narrowly lost. A study “proving” that GE food labeling would make food costs in California skyrocket may explain the sudden change of heart.

How was that conclusion reached? The study authors – partially funded by the “No on 37” campaign – began with a wrong assumption. American consumers are just like European consumers, they figured. And, just like in Europe, when GE foods must be labeled, most food manufacturers will instantly remove all GE ingredients from their products. Because GE ingredients like corn and soybeans are present in almost all processed foods, reformulating every food sold in California to remove them would be massively disruptive to the food industry. In fact, it would raise food prices!

But their assumption is wrong. Europeans are willing to pay more for non-GE food, but most Americans aren't. So why would food manufacturers reformulate their products, resulting in higher prices, if they know most Americans wouldn't pay for it? They wouldn’t.

The average voter in California never heard these details. They just heard that their food prices were going up unless they voted no on Prop 37. So they did.

2. Throw Out the Data You Don’t Like

In 2012, researchers in the Netherlands published a study finding that organic agriculture yielded only 80 percent as much as conventional agriculture. Wow, is that true? Well, if it is, this study certainly doesn’t prove it. The study doesn’t actually prove much of anything…because the researchers disregarded any data they did not like.

Michael Hansen, a senior scientist at Consumers Union and a formidable agriculture expert, read the study and observed: “When you actually look at the paper, you'll see it's incredibly biased in favor of conventional ag, but in a very technical way.”

The study authors essentially cherry-picked which data to include, excluding any “organic” method that did not meet the very strictest definition of the term organic, and throwing out any data from conventional systems with “unrepresentative yield levels.” The paper’s own explanation of that exclusion reads: “Yield data for industrialized countries were considered unrepresentative if conventional yields appeared to be far below the regional average, unless this was caused by factors that can also occur in real farming situations, such as pests, diseases or droughts. For developing countries ‘unrepresentative’ implied conventional yield levels that seemed to be far below yields achieved under best farmers’ management.”

“In other words,” wrote Hansen, who is well-traveled and very familiar with agriculture in the Global South, “for developing countries, rather than compare organic to what local farmers that use some chemicals find, they dismiss all but the yields achieved using lots of chemicals under the best conditions – which is not what farmers actually face in developing countries.”

“Finally,” Hansen concluded, “look at their main hypothesis: 'Our hypothesis was that the closer conventional agriculture gets to the potential or water-limited yield, the larger the yield gap between organic and conventional systems will be.' In other words, they're not interested in the conditions that farmers face in the real world in developing countries.”
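To see how this kind of exclusion skews a comparison, here is a minimal sketch with made-up yield numbers (purely illustrative, not the study’s actual data) of what happens to an organic-to-conventional yield ratio when low conventional yields are labeled “unrepresentative” and dropped:

```python
# Hypothetical yields in tons/hectare; every number here is invented.
organic = [4.0, 4.2, 3.8, 4.1]
conventional = [5.0, 5.2, 3.0, 2.8]  # includes two low-yield farms

def ratio(org, conv):
    """Organic yield as a fraction of average conventional yield."""
    return (sum(org) / len(org)) / (sum(conv) / len(conv))

# Fair comparison: keep all the data.
print(round(ratio(organic, conventional), 2))  # 1.01 -- roughly equal

# Biased comparison: drop conventional yields deemed "unrepresentative".
trimmed = [y for y in conventional if y >= 4.0]
print(round(ratio(organic, trimmed), 2))  # 0.79 -- organic now looks 21% worse
```

Nothing about the organic data changed; discarding the low end of one group is enough to manufacture a yield gap.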

3. Oops, We Can’t Detect the Chemical in Question

Whenever you hear that there is none of a chemical in something, it’s time to ask: What’s the detection limit? What’s the smallest amount of the chemical they were looking for?

Take, for example, early studies of the pesticide imidacloprid by its manufacturer, Bayer. A recent European report tells what happened when beekeepers began alleging that the pesticide was the cause of mass bee die-offs. Bees consume nectar and pollen, so Bayer’s first step was checking to see whether any of its pesticide was present in nectar or pollen.

In 1993, Bayer set the detection limit at 10 parts per billion (ppb). When it tested for its pesticide in the nectar or pollen of treated crops, it either couldn’t find any or couldn’t quantify what it detected. Clearly, this pesticide wasn’t harming any bees, because the bees weren’t exposed to it in nectar or pollen!

Six years later, in 1999, a study of sunflowers found 3.3 ppb of the pesticide in pollen and 1.9 ppb in nectar – amounts far below the previous 10 ppb detection limit! And, in 2001, scientists found that chronic exposure to 0.1 ppb of the pesticide kills a bee in 10 days. How much money did Bayer earn from pesticide sales while it stalled the science for six years with its high detection limit?
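The effect of a detection limit is easy to demonstrate in code. In this hypothetical sketch (the function and reporting format are illustrative, not Bayer’s actual lab protocol), the very same sample reads as “not detected” or as a real concentration depending purely on the limit chosen:

```python
def report(concentration_ppb, detection_limit_ppb):
    """Report a measurement the way a lab would: anything below
    the detection limit simply shows up as 'not detected'."""
    if concentration_ppb < detection_limit_ppb:
        return "not detected"
    return f"{concentration_ppb} ppb"

pollen = 3.3   # ppb found in sunflower pollen (1999 study)
nectar = 1.9   # ppb found in sunflower nectar

# With a 10 ppb detection limit, both residues vanish from the report...
print(report(pollen, 10.0))   # not detected
print(report(nectar, 10.0))   # not detected

# ...yet with a 1 ppb limit, both are plainly there -- and both exceed
# the 0.1 ppb chronic dose later shown to kill bees within 10 days.
print(report(pollen, 1.0))    # 3.3 ppb
print(report(nectar, 1.0))    # 1.9 ppb
```

“Not detected” never means “not present.” It only means “not present above the number we chose to look for.”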

4. Findings That Aren’t Statistically Significant

“GMOs Cause Tumors in Rats” screamed the headlines after the French scientist Gilles-Éric Séralini published findings based on a two-year feeding study using Monsanto’s Roundup Ready corn and its herbicide Roundup. Oh my god, this is terrible! Americans have been eating the variety of Roundup Ready corn in question since 2001! We’ll all get cancer!

Okay, take a step back. Were the findings statistically significant? No, they weren’t. Consumers Union scientist Michael Hansen points out that the study used only 10 rats of each sex in each group tested. After publication, it was also acknowledged that “the sample size of their treatment groups was too small to allow them to draw conclusions with regard to long-term carcinogenicity and mortality.” Oops.

That said, in this case, Hansen points out that the data “suggests that there might be something there.” For the most part, the control rats were healthier than the rats that were fed Roundup or Roundup Ready corn. According to Hansen, if the findings were entirely random, one would expect the number of control group rats afflicted with each morbidity to sometimes be more and sometimes less than the number of sick rats in the treatment groups. He’d like to see a follow-up study with sample sizes large enough to support firm conclusions.
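Why does a group size of 10 doom statistical significance? A back-of-the-envelope sketch (using hypothetical tumor rates, not Séralini’s actual numbers, and a simple two-sample approximation) shows that even a doubling of the tumor rate can’t clear the usual z = 1.96 significance threshold at that sample size:

```python
import math

def z_for_proportions(p1, p2, n):
    """Z-statistic for the difference between two observed proportions,
    each measured in a group of size n (two-sample normal approximation)."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return abs(p2 - p1) / se

# Hypothetical: 30% of controls vs 60% of treated rats develop tumors.
control, treated = 0.3, 0.6

# With 10 rats per group, even this doubling is not significant...
print(round(z_for_proportions(control, treated, 10), 2))  # 1.41  (< 1.96)

# ...but with 50 rats per group, the same rates clearly are.
print(round(z_for_proportions(control, treated, 50), 2))  # 3.16  (> 1.96)
```

The effect didn’t change between the two calls; only the group size did. Small groups produce noisy estimates, so real effects and random flukes become indistinguishable.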

5. Design the Study to Get the Results You Want

When the EPA allowed the commercialization of Bayer’s pesticide clothianidin, it required the company to conduct a study proving that the pesticide would not harm bees. So Bayer performed a study.

It placed four beehives in the middle of 2.5 acres of treated canola. Bees, of course, forage as far as several miles from their hive, so no doubt the bees feasted on plenty of pesticide-free nectar and pollen during the study. So…the scientists found that their pesticide caused no harm to the bees.

Beekeepers were outraged enough when they discovered the inadequacy of the study, but a revelation that came out two years later made matters even worse. In the U.S., clothianidin is used on corn and canola. Canola is a minor crop in the U.S., whereas corn is the most commonly grown crop we’ve got. Clothianidin-treated corn has about 10 times as much pesticide in its pollen as treated canola. Perhaps that’s why Bayer chose to perform its study on canola.

6. All of the Above

For a “Bad Science Sampler,” check out the studies used to justify the safety of the genetically engineered AquAdvantage salmon. You name it, they did it. Again, Michael Hansen easily poked holes in their work until the study resembled Swiss cheese. They culled young fish with the worst deformities, thus excluding them from the data. They used sample sizes as low as six fish. And they used a detection limit too high to detect any growth hormone in the muscle and skin of any fish. (The GE salmon are engineered to produce extra growth hormone, so the researchers might have looked a little harder for it if they truly wanted to show there was no difference between GE salmon and non-GE salmon.)

As you can see, just because a study reaches a conclusion, that doesn’t mean it’s correct. Poorly designed studies and statistically insignificant results happen all the time. Now, if only journalists did a little more digging into the validity of outrageous studies’ findings before reporting on them.