
July 7, 2015

Google’s image-recognition software, which uses artificial neural networks loosely modeled on the human brain, turns out to be capable of something like dreaming. When Google engineers fed photos of random objects to a network trained to identify animals, it “dreamed” up disturbingly distorted dogs and other hybrid creatures, set against kaleidoscopic backdrops of clouds and mountains.

Trippy. (Image: Google)
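Under the hood, DeepDream amplifies whatever patterns already excite a layer of the network: it runs gradient ascent on the input image itself, nudging pixels to increase the layer’s activation. Below is a toy NumPy sketch of that loop, with a single hand-picked edge filter standing in for a trained network layer; the function names, filter, and step size are illustrative assumptions, not Google’s released code.

```python
import numpy as np

def conv2d(img, k):
    # "valid" 2-D convolution via explicit loops (small images only)
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def dream_step(img, k, lr=0.01):
    act = conv2d(img, k)                  # the "layer activation"
    # Objective: 0.5 * sum(act**2). Its gradient w.r.t. the image is a
    # transposed convolution: scatter each activation back through the kernel.
    grad = np.zeros_like(img)
    kh, kw = k.shape
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            grad[i:i + kh, j:j + kw] += act[i, j] * k
    grad /= np.abs(grad).mean() + 1e-8    # normalize the step, as DeepDream does
    return img + lr * grad                # gradient ASCENT: amplify the patterns

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32)) * 0.1          # faint random "photo"
kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])                 # vertical-edge filter
before = 0.5 * np.sum(conv2d(img, kernel) ** 2)
for _ in range(20):
    img = dream_step(img, kernel)
after = 0.5 * np.sum(conv2d(img, kernel) ** 2)
```

After the loop, `after` exceeds `before`: the image has drifted toward whatever the filter “sees,” which is the same feedback effect that turns clouds into dog faces when the layer comes from a real network trained on animals.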

Creepy. But it’s about to get a whole lot creepier. Shortly after publishing the results of their experiments, Google developers made the code for the “DeepDream” software public, enabling anyone to feed images into the machine and see what comes out. The results were both funny:

…and utterly terrifying:

And then, someone published a set of instructions for running DeepDream on GitHub, using the 1998 film Fear and Loathing in Las Vegas as an example of footage you could feed into the neural network.

Fear and Loathing, which stars Johnny Depp as a drug-using hedonist, is basically just one long acid trip. Here’s what Google’s AI made of it:

GitHub/graphific

You want more? Okay, here’s more:

You can see hundreds of creations, ranging from hilarious to disturbing to beautiful to metaphysical, by searching the #DeepDream hashtag on Twitter or browsing the new DeepDream section on Reddit.

Now that none of us are going to get any sleep tonight, here’s a two-minute clip from the movie that was fed through the software: