In the second of our series of podcasts on artificial intelligence produced in association with Darktrace, we dive into something a little spookier: the world of "insider threat" detection.

There have been a number of recent high-profile cases in which people within organizations used their access to data for self-enrichment or other ill intent, slipping past the policies and tools collectively referred to as "data loss prevention." Most of the time, employees are long gone before the data theft is noticed (if it ever is), and preventing data loss can seem to require a Minority Report level of precognition.

To get some insight into how AI could play a role in detecting insider threats, Ars editors Sean Gallagher and Lee Hutchinson spoke with Kathleen Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, about her research into identifying the tells of someone about to take the data and run. Lee and Sean also talked to Rob Juncker, senior vice president of research and development at data loss prevention software company Code42, about whether AI can really help detect when people are about to walk off with, or upload, their employer's data. And Justin Fier, director for cyber intelligence and analysis at Darktrace, spoke with Lee about how AI-related technologies are already being brought into play to stop insider threats.

This special edition of the Ars Technicast podcast can be accessed in the following places:

iTunes:

https://itunes.apple.com/us/podcast/the-ars-technicast/id522504024?mt=2 (Might take several hours after publication to appear.)

RSS:

http://arstechnica.libsyn.com/rss

Stitcher:

http://www.stitcher.com/podcast/ars-technicast/the-ars-technicast

Libsyn:

http://directory.libsyn.com/shows/view/id/arstechnica