In this episode of the O’Reilly Bots Podcast, I speak with Tim Hwang, an affiliated researcher at the Oxford Internet Institute, about AI-driven psyops bots and their capacity for social destabilization.

Until recently, the psychological operations (psyops) conducted by governments and political organizations were mostly analog: dropping leaflets from airplanes, blasting radio messages across frontiers, planting stories with journalists, and dragging loudspeakers through city streets.


Now, like some other forms of publishing, the practice of psyops is contemplating an online, AI-driven future in which swarms of carefully targeted bots disseminate information instantly. Compared to traditional psyops, AI-driven bots are highly scalable, offer sophisticated targeting capabilities, and are cheap to deploy—accessible to one-person organizations as well as great-power governments.

Hwang is the author, with Lea Rosen, of “Harder, Better, Faster, Stronger: International Law and the Future of Online PsyOps,” published recently by the Oxford Internet Institute.

He outlines a handful of conceptual “future scenarios” in which hostile actors might use bots to sow chaos—for instance, to find people who might be open to radicalization, or to misdirect crowds of bystanders during terrorist attacks. Hwang says existing legal frameworks aren’t sufficient to manage these threats, but we talk about three possible ways to address them:

Governments come together to form an international body that brings transparency to the field by cataloging attacks and publicizing methods (a parallel to the INTERPOL approach for policing international crime)

Governments pressure social media platforms to regulate and stop hostile psyops campaigns

A social approach that emphasizes “media literacy” among the public
