Twitter recently took drastic action as part of an effort to slow the spread of misinformation through its platform, shutting down more than two million automated accounts, or bots.

But Twitter shuttered only the most egregious and obvious offenders. You can expect the tricksters to up their game when it comes to disguising fake users as real ones.

It’s important not to be swayed by fake accounts or to waste your time arguing with them. Identifying bots in a Twitter thread has become a strange version of the Turing test; accusing posters of being bots has even become an oddly satisfying way to insult their intelligence.

Advances in machine learning hint at how bots could become more humanlike. IBM researchers recently demonstrated a system capable of conjuring up a reasonably coherent argument by mining text, and Google’s Duplex software shows how AI systems can learn to mimic the nuances of human conversation.

But technology might also provide a solution. In 2015 the Defense Advanced Research Projects Agency ran a contest on Twitter bot detection. Participants trained their systems to identify fake accounts using five key data points. The resulting systems are far from perfect (the best worked about 40 percent of the time), but the efforts reveal how best to spot a bot on Twitter. We may come to rely on these signals much more.
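The article doesn’t enumerate the data points the contestants used, but detection systems of this kind typically combine simple per-account signals into a score. The sketch below is purely illustrative, not the DARPA contestants’ method: the features (follower ratio, posting volume, default avatar, account age) and all thresholds are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    following: int
    tweets_per_day: float
    has_default_avatar: bool
    account_age_days: int

def bot_score(acct: Account) -> float:
    """Combine simple heuristics into a 0-1 score; higher means more bot-like.

    The weights and cutoffs here are illustrative guesses, not tuned values.
    """
    score = 0.0
    # Bots often follow many accounts while attracting few followers.
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.3
    # Sustained high posting volume is hard for a human to maintain.
    if acct.tweets_per_day > 50:
        score += 0.3
    # A default avatar is common on throwaway accounts.
    if acct.has_default_avatar:
        score += 0.2
    # Very new accounts are more suspect.
    if acct.account_age_days < 30:
        score += 0.2
    return score

suspect = Account(followers=3, following=900, tweets_per_day=120,
                  has_default_avatar=True, account_age_days=5)
print(bot_score(suspect))
```

A real classifier would learn such weights from labeled accounts rather than hand-pick them, which is part of why even the contest’s best systems were right well under half the time.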