For the past week, psychologists all over America have been freaking out.

The cause of their agita was an observation by a psychology graduate student at the University of Minnesota named Max Hui Bai. Like many researchers, Bai uses Amazon’s Mechanical Turk platform, where individuals sign up to complete simple tasks, such as taking surveys for academics or marketers, and earn a small fee. On Tuesday, August 7, he posed a simple question in a Facebook group for psychology researchers: “Have anyone used Mturk in the last few weeks and notice any quality drop?”

As he would later elaborate in a blog post, Bai had found that the surveys he conducted with MTurk were full of nonsense answers to open-ended questions and respondents with duplicate GPS locations. He said he had to throw out nearly half of the data in his most recent survey, a sharp increase from what he was used to seeing. His Facebook post garnered 181 comments, with other researchers describing similar signs of low-quality data in their own recent work. A number of them wondered if the culprit was bots—automated programs mimicking human behavior, not the actual human labor MTurk is supposed to supply.
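Duplicate GPS coordinates are one of the easier red flags to screen for programmatically. As a rough sketch (the field names and records here are hypothetical, not Bai’s actual dataset), a researcher might flag every response whose reported location appears more than once:

```python
from collections import Counter

# Hypothetical survey responses; "gps" is the respondent's reported
# latitude/longitude pair, "answer" an open-ended text response.
responses = [
    {"id": 1, "gps": (44.97, -93.23), "answer": "I prefer the first option."},
    {"id": 2, "gps": (44.97, -93.23), "answer": "good good good"},
    {"id": 3, "gps": (40.71, -74.01), "answer": "The second image felt more natural."},
    {"id": 4, "gps": (44.97, -93.23), "answer": "good good good"},
]

# Count how many responses share each exact GPS location.
location_counts = Counter(r["gps"] for r in responses)

# Flag any response whose location appears more than once.
flagged = [r["id"] for r in responses if location_counts[r["gps"]] > 1]
print(flagged)  # → [1, 2, 4]
```

A shared location alone doesn’t prove a bot, of course; it can also mean several workers behind one VPN or institutional network, which is why researchers typically combine this check with others, like scanning open-ended answers for repeated nonsense text.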

The discussion soon spread across Twitter and email, until it appeared the whole field was worried about MTurk. By Friday, New Scientist ran an article with the headline “Bots on Amazon’s MTurk Are Ruining Psychology Studies.” One psychology professor mused on Facebook, “I wonder if this is the end of MTurk research?”

If that were the case, it would be a pretty big deal. Thousands of published social science studies use MTurk survey data every year, according to Panos Ipeirotis, a data scientist at New York University’s Stern School of Business.

"When most of us think of bots, we think of large networks of criminals, but a bot is just a tool for automation." Reid Tatoris, Distil Networks

But here’s the thing: It’s hard to know for sure if what Bai reported was the result of bots run amok. There are plenty of explanations for junk responses on MTurk. Bai recognizes this. “It might be bots, it might be human-augmented bots, or it might be humans who are tired of taking the survey and are just randomly clicking the buttons,” he says. It could also be the result of poor survey design, as Joe Miele, who operates an MTurk data consultancy, pointed out in response to the uproar.

And not all bot-like behavior on MTurk is considered bad. The platform's Acceptable Use Policy says that Amazon is “generally OK with you using scripts and automated tools” to more efficiently preview and pick tasks. It’s not uncommon for MTurk workers, or Turkers, to use scripts to help them find high-paying tasks they’re suitable for and to accept them quickly. What you cannot do is complete those tasks using automated tools, because then you aren’t using your human intelligence to do the job, and that’s the whole point of MTurk. That hasn’t stopped some people from reportedly using tools to automate filling out forms, but it’s not clear yet whether their use is on the rise, or even that common. Amazon will only say this behavior is against its rules.

“There are bots on MTurk and have been for years,” says digital labor researcher Rochelle LaPlante, a former moderator of Reddit’s r/mturk subreddit. “I don’t know if this new flare of discussion is actually an increase in bots, or just an increase in researchers talking about it and actively searching their data for it.”

MTurk and Social Science

When it launched in 2005, Mechanical Turk was a game-changer. It gave researchers access to a far wider participant pool than campus undergraduates, who before online crowdsourcing had been the main subjects of many such studies, and at a relatively low cost. MTurk ushered in a “golden age” for social science research. Today, data gathered on the platform is used in thousands of studies a year.