A team of researchers from the Massachusetts Institute of Technology has reportedly created the first psychopath AI.

Dubbed Norman, this is not your typical artificial intelligence system. Unlike the AI platforms, algorithms, and neural networks that help you search online or sort your social media feed, Norman looks at input images and generates captions for them. Its captions usually involve death or destruction, which is why Norman is considered a psychopathic AI.

Yes, you read that right. If your concept of an evil machine involves an artificially intelligent system with twisted and gruesome thoughts, look no further than Norman.

Why is that?

Unlike an algorithm that is simply tasked with matching an image to other similar images, Norman interprets and judges what it sees. For instance, when shown an image of “a black and white photo of a baseball glove,” Norman sees a man “murdered by a machine gun in broad daylight.”

Norman was reportedly created as a “case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms.” How did the researchers do that?

According to the researchers, the psychopath AI was set up to perform image captioning, a deep-learning technique for generating textual descriptions of images. They then exposed Norman to an unnamed subreddit infamous for its gruesome images of death.
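Image captioning systems are typically built as an encoder-decoder pipeline: an encoder turns the image into a numeric feature representation, and a decoder turns that feature into text learned from image-caption training pairs. The toy sketch below is purely illustrative (it is not the MIT team's model, and `encode`/`decode` and the feature values are invented for this example); real systems use a convolutional encoder and a neural text decoder.

```python
# Hypothetical toy sketch of an encoder-decoder captioning pipeline.
# A real captioner would use a CNN encoder and an RNN/transformer
# decoder trained on thousands of image-caption pairs.

def encode(image_pixels):
    """Stand-in 'encoder': reduce an image to a single brightness feature."""
    return sum(image_pixels) / len(image_pixels)

def decode(feature, training_pairs):
    """Stand-in 'decoder': return the caption whose learned feature
    value is closest to the image's feature."""
    return min(training_pairs, key=lambda pair: abs(pair[0] - feature))[1]

# "Training data" the model has seen: (feature, caption) pairs.
vocab = [
    (0.2, "a black and white photo of a baseball glove"),
    (0.8, "a small bird perched on a branch"),
]

dark_image = [0.1, 0.2, 0.3]  # mean brightness ~0.2
print(decode(encode(dark_image), vocab))
# -> a black and white photo of a baseball glove
```

The key point is that the decoder can only produce descriptions drawn from what it saw during training, which is exactly the property the Norman experiment exploits.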

After feeding Norman images from the subreddit, the researchers had the AI interpret a series of Rorschach inkblots. Norman’s answers were then compared to the responses of an AI system trained on more benign images. Here are some samples of Norman’s answers taken from the test.

If you’re worried that this AI marks the beginning of humanity’s destruction at the hands of evil machines, you shouldn’t be. There’s a purpose behind Norman’s development. The MIT researchers created Norman to show that artificial intelligence algorithms are not inherently biased; they become biased through the data they are fed.

In other words, Norman was not built to be a psychopath; its exposure to disturbing images and content from Reddit made it one.
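That thesis can be illustrated with a tiny experiment: run identical captioning code against two different training corpora and compare the outputs for the same input. Everything below is a hypothetical sketch (the `caption` function, the corpora, and the feature values are invented), but it captures the mechanism the researchers describe.

```python
# Hypothetical illustration: the SAME code, trained on different data,
# produces very different captions for the same image.

def caption(feature, training_pairs):
    """Nearest-neighbour 'captioner': return the caption whose learned
    feature value is closest to the input feature."""
    return min(training_pairs, key=lambda p: abs(p[0] - feature))[1]

# Two made-up training corpora, mapping feature values to captions.
neutral_corpus = [(0.2, "a baseball glove on a table"),
                  (0.7, "a vase of flowers")]
grim_corpus    = [(0.2, "a man murdered in broad daylight"),
                  (0.7, "a body lying in the street")]

image_feature = 0.25  # the same input image in both cases
print(caption(image_feature, neutral_corpus))  # benign description
print(caption(image_feature, grim_corpus))     # disturbing description
```

Nothing in the algorithm changed between the two runs; only the training data did, which is Norman's point in miniature.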

With an AI’s worldview determined by the information it learns from, do you believe there should be restrictions on the data fed to artificially intelligent machines?