Researchers at the Massachusetts Institute of Technology created an artificial intelligence labeled a "psychopath," using disturbing image captions found on Reddit.

The AI is named Norman, after the character in the Alfred Hitchcock classic "Psycho."

Researchers trained Norman using image captions from a subreddit "dedicated to document and observe the disturbing reality of death," according to a description on the MIT website for Norman. Because of ethical and technical concerns, the team used only the captions, not actual images of people dying.

"The first rule of this subreddit is that there must be a video of a person actually dying in the shared post, and the submission titles must be descriptive and accurate enough to understand exactly what is the content inside, such as 'a young man stabbed to death'," reads a statement from the team that created Norman.

Researchers then took a series of Rorschach inkblots and fed them to both Norman and a standard AI to compare results.

In one inkblot, the standard AI might see "a closeup of a vase with flowers," while Norman sees "a man is shot dead." In another example, the standard AI describes an inkblot as "a black and white photo of a small bird," while Norman describes "man gets pulled into dough machine."

So, why would MIT create a psycho AI? It's all about the data behind the algorithms: when things go awry, it's not as simple as blaming the machine.

"The data that is used to teach a machine learning algorithm can significantly influence its behavior," reads a statement on the project's website. "So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it."
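The point can be illustrated with a toy sketch (this is not MIT's actual model, and the training captions below are hypothetical stand-ins): the exact same caption-picking code, fed two different training sets, describes the same ambiguous input in two very different ways.

```python
from collections import Counter

def train(captions):
    """Build a simple unigram frequency model from training captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, candidates):
    """Pick the candidate caption whose words the model saw most often."""
    def score(caption):
        return sum(model[word] for word in caption.lower().split())
    return max(candidates, key=score)

# Hypothetical training sets standing in for "standard" vs "Norman" data.
standard_data = ["a vase with flowers", "a small bird on a branch",
                 "a group of people smiling"]
disturbing_data = ["a man is shot dead", "man gets pulled into machine",
                   "a person falls to their death"]

# Two candidate captions for the same ambiguous inkblot.
candidates = ["a vase with flowers", "a man is shot dead"]

print(describe(train(standard_data), candidates))    # prints "a vase with flowers"
print(describe(train(disturbing_data), candidates))  # prints "a man is shot dead"
```

Identical algorithm, identical input: only the training data differs, and so does the "worldview" of each model.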

Follow Brett Molina on Twitter: @brettmolina23.