Earlier this year I explored the rise of automated journalism, with major publications such as Forbes and the Associated Press among the advocates of the automated approach to storytelling.

Whilst the process is increasingly common, less is known about how readers feel about being fed a story by an algorithm. It's a topic that a recent research study set out to explore in more detail.

Response to robot journalism

The researchers presented several hundred participants with an article on one of a range of topics. Half of the participants were told the article was automatically generated, with the other half told it was written by a human journalist.

Interestingly, readers' preference for automation seemed to differ depending on the topic of the article. They preferred the robot when the article was about finance, but the human when the topic was healthcare.

“It seems that we might not be as comfortable with robots delivering news related to health,” the authors say. “We suspect that this was because of an ‘eeriness’ or a creepy feeling the participants felt, and our results backed this up.”

Making robots human

The reason for this may lie in our desire for robots to appear human. A recent study highlighted that for robots to thrive in social environments, they may need to be built with various 'human' flaws.

This was reflected in feedback from participants, who often felt the automated stories were eerie, and therefore less trustworthy.

Trust also depended significantly on the publication the article appeared in. When readers were told the piece was from the New York Times, they rated it far more highly than when told it was from the National Enquirer, even if the article was openly written by a robot.

“We’re entering an era where a lot of machines are attaining a status that previously was so sacredly human,” the authors say. “Our larger project is exploring what it means for machines to be their own independent agents, whether they give us information, get work done behind the scenes or personalize things.”

Studies have shown that people tend to be fairly unforgiving of robots when they make mistakes, but I suspect this will be less of an issue in journalism than in, for instance, driverless cars.

There is also the issue of biases that we actively buy into when seeking out news content. It’s unlikely, for instance, that we would ever want a robot to be neutral on political issues, so could we ever reach a scenario where there are conservative or liberal robots writing stories for a particular audience?

That seems a considerably larger hurdle to overcome than producing relatively dry earnings statements or sports reports. I wouldn't rule it out as impossible, however.