During the summer before the 2016 presidential election, John Seymour and Philip Tully, two researchers with ZeroFOX, a security company in Baltimore, unveiled a new kind of Twitter bot. By analyzing patterns of activity on the social network, the bot learned to fool users into clicking on links in tweets that led to potentially hazardous sites.

The bot, called SNAP_R, was an automated “phishing” system, capable of homing in on the whims of specific individuals and coaxing them toward that moment when they would inadvertently download spyware onto their machines. “Archaeologists believe they’ve found the tomb of Alexander the Great is in the U.S. for the first time: goo.gl/KjdQYT,” the bot tweeted at one unsuspecting user.

Even with the odd grammatical misstep, SNAP_R elicited a click as often as 66 percent of the time, a rate on par with human hackers who craft phishing messages by hand.


The bot was unarmed, merely a proof of concept. But in the wake of the election, amid the wave of concern over political hacking, fake news and the dark side of social networking, it illustrated why the landscape of online fakery will only darken further.