A programmer created an algorithmically-generated face, and then made the network slowly forget what its own face looked like.

The result, a piece of video art titled "What I saw before the darkness," is an eerie time-lapse view of the inside of a demented AI’s mind as its artificial neurons are switched off, one by one, HAL 9000 style.

The face in the network's mind doesn't exist in real life. Instead, it was composed by generative adversarial networks, or GANs, a type of machine learning program that “learns” from existing photos to produce new things. In this case, the GAN trained on millions of portraits to produce a realistic human face. The network's interconnected neurons dictate her features: eyes, skin color, shapes, strands of hair, much as a human brain uses a network of neurons to construct a mental image of a face.
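The generator half of a GAN can be sketched in a few lines of numpy. This is a toy with random, untrained weights, not the artist's actual model: it only illustrates how a network maps a latent vector through layers of "neurons" to pixels, so that every neuron contributes to every part of the image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny "generator": maps a 16-dim latent vector to an 8x8 grayscale image.
# A real GAN trains these weights against a discriminator on photos;
# here they are random, purely to show the shape of the mapping.
W1 = rng.normal(size=(16, 64))
W2 = rng.normal(size=(64, 64))

def generate(z):
    h = np.tanh(z @ W1)       # hidden "neurons" whose activations
    img = np.tanh(h @ W2)     # jointly determine every pixel
    return img.reshape(8, 8)  # one generated image

z = rng.normal(size=16)       # a random point in the latent "face space"
face = generate(z)
print(face.shape)             # (8, 8)
```

Because every pixel depends on every hidden neuron, switching neurons off doesn't delete one feature cleanly; it degrades the whole image, which is why the face in the video dissolves rather than losing parts one at a time.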

Then the project's creator, an artist who went simply by "the girl who talks to AI" in an email, gradually shut off individual neurons, repeating the process until the network effectively “forgot” what a face looks like.

The effect is pretty creepy. At first, it seems like the generated face is aging. Lines appear under her eyes and around the edges of her face, and her hair thins and fades. After a few seconds, something altogether different begins. Her skin turns a greenish hue, and her features begin to wash away as neurons continue to go dark. Within sixty seconds, she's completely decomposed—nothing but a white and brown smudge.

"The inspiration behind the project is rooted in contemplation of human perception," the creator said. "Everything we see is the brain’s interpretation of the surrounding world. A person has no access to the outside reality except for this constructed image."


She compared this to how Claude Monet's paintings shifted to blurred, muddled greens and yellows as he aged: Our eyes and brains and the networks that connect them undergo changes and deterioration that we might barely notice as they happen.

Neural networks are still somewhat mysterious things, and their decisions can be tough for experts to tease apart after the fact. One way that computer scientists have attempted to lift the veil on AI is by working backwards, systematically deleting neurons to see which are most important to an AI’s picture of the world, as Alphabet-owned machine learning firm DeepMind did last year.