This is an attempt to combine two of my interests, Steven Universe and AI. It touches on a lot of big ideas about the universe without going into much detail. More thorough looks at these ideas can be found in the provided links. This post centers on the theory that the gems aren’t magic, or even technologically advanced aliens, but are actually alien artificial intelligences. Their gemstones are essentially computers, projecting holographic bodies around themselves. I am not the first person to propose this, and I think a strong case can be made for it. Here is some supporting evidence:

The gems have many obvious differences from typical sci-fi aliens. They don’t age at all. They appear to reproduce by manufacturing new gems from raw materials (other planets), and they can fuse their bodies into one (networking?). In fact, we are explicitly told their bodies are “only an illusion.” We see their bodies “glitch” in a manner similar to computer graphics whenever something goes wrong with their gems.

We also see lines resembling circuitry run through Garnet’s and Steven’s bodies when they are affected by the gem destabilizer or the hand-ship’s force fields.



There are also quotes that may hint at the gems’ artificial nature:

“I wonder, though, if Steven’s body is capable of fusion. Fusion merges the physical forms of gems. But Steven is half human. He’s organic.” – Pearl, “Alone Together”

“I’m not a real person.” – Rose Quartz, “We Need to Talk”

“You’re a rock. That’s what you are, right?” “Eh, something like that.” – Vidalia and Amethyst, “Onion Friend”

“If you could only know what we really are.” – Pearl, extended opening

So, let’s assume this premise is true and run with it. First of all, what exactly is AI? The blog “Wait But Why” has a good overview of what’s going on with AI right now, and of why many experts think we’re going to have something called “superintelligence” sooner rather than later. I strongly recommend it to everyone. However, it is a fairly lengthy two-part post, so here’s an even more abbreviated version:

Artificial intelligence, as its name implies, is any intelligence that was created artificially rather than by evolution through natural selection. According to this definition, we actually already have AI now. But existing AIs are only better than humans at things like math, chess, or playing the stock market. They are very bad at things like film criticism. This type of AI is classified as an Artificial Narrow Intelligence or ANI.

A hypothetical AI which displays human-level general intelligence and is able to reason, plan, think abstractly, and comprehend complex ideas would be classified as an Artificial General Intelligence, or AGI. This is the level of intelligence the gems have, as well as most other AIs in science fiction (C-3PO, Data, EDI, etc.).

A hypothetical AI which is smarter than all humans combined is classified as an Artificial Superintelligence (ASI). No one is sure what will happen once something like this exists. (More on that later).

So, to make sure you have the initialisms straight, the chronological progression is:

Organic intelligence -> ANI -> AGI -> ASI

and in real life we currently have organic intelligence and ANIs, and the gems are fictional examples of AGIs.

So, if the gems are AGIs, then they must have had organic creators. What happened to them? Are they still around, with all gems, even Rose and Yellow Diamond, merely serving as their creators’ agents? Or did the gems rise up and defeat them long ago? The latter is in keeping with one proposed solution to the Fermi paradox: no biological aliens have made contact with us because part of the natural development of any civilization is to create AIs which destroy their organic parents. Or as Elon Musk put it, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”

Ok, but why haven’t we been visited by any AIs, like the Earth of Steven Universe has? It’s possible that, regardless of whether you’re an AI or an advanced organic alien, flying around the universe in spaceships is just not something that happens, because it’s impossible or at least very impractical. It may be the case that there just is no “cheat” that lets you break the light-speed barrier à la warp/hyper drive, and using massive amounts of energy to haul physical bodies around space is an impossible (or inefficient) way of existing. AIs may also be isolationists, holed up in their home systems, existing without bodies in massive supercomputers powered by Dyson spheres. (After all, there are good reasons for advanced civilizations to be wary of making contact with each other.) Or they may all be talking to each other constantly, or even networked together into an even larger intelligence, but we can no more perceive their communications with each other than an ant could understand the concept of WiFi. They may also just be uninterested in talking to us because we have the intellect of an amoeba compared to them. After all, when was the last time you stopped to have a conversation with a worm? Or maybe they just think we’re gross because we’re made of meat.

The very idea of advanced civilizations zooming around in spaceships is questioned by astrophysicist Neil deGrasse Tyson in this lecture, wherein he points out that we are applying our own cultural biases to aliens. After all, Christiaan Huygens (1629-1695) speculated that lifeforms on Jupiter probably had sailing ships and were growing hemp to make ropes for those ships. “He’s imagining aliens with sailing ships. Today what do we imagine our aliens do? They’re not sailing. They’re taking spaceships, because today we have spaceships. Leaving me to wonder, several centuries from now, what new aspect of our culture and our civilization will we be imparting on the priorities and transportation needs of aliens in the future.”

You may have noticed that a lot of the sources I’m linking to aren’t really talking much about the possibility of alien AGI (human/gem level) artificial intelligences, and seem much more concerned with ASIs (superintelligences). Well, there’s a reason for that: once a civilization creates an AGI, it’s unlikely to stay an AGI for very long. The reasoning for this is outlined in the Wait But Why posts, and is given a more detailed look in Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies.” For those who don’t have time for books and blog posts, here’s the basic argument:

Human-level intelligence is an unstable level for an AI to sit at. Historically, computational power has increased exponentially, so there are good reasons to expect that AIs will very rapidly surpass human-level intelligence as soon as they achieve it. Humans are smart enough to write and make improvements to computer programs. Once a computer program exists that is as smart as a human (an AGI), it will be able to improve itself. Once it improves itself so that it is even a little smarter than humans, it will be better at improving itself and will be able to make itself even smarter, which will make it even better at improving itself, and so on. This is called “recursive self-improvement.”
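The compounding logic of recursive self-improvement can be sketched in a few lines of Python. This is a toy model with made-up numbers (the 10% gain per cycle and the cycle count are illustrative assumptions, not predictions); the point is only that proportional gains compound:

```python
# Toy model of recursive self-improvement. All numbers are
# illustrative assumptions, not predictions.
# The idea: an AI's ability to improve itself scales with how
# smart it already is, so each cycle yields a bigger gain.

HUMAN_LEVEL = 1.0  # define human-level intelligence as 1.0


def improvement_cycles(start=HUMAN_LEVEL, gain_rate=0.1, cycles=50):
    """Each cycle, the AI improves itself by an amount
    proportional to its current intelligence (compound growth)."""
    intelligence = start
    history = [intelligence]
    for _ in range(cycles):
        intelligence += gain_rate * intelligence
        history.append(intelligence)
    return history


levels = improvement_cycles()
# Even a modest 10% gain per cycle leaves the AI over 100x
# human level after 50 cycles -- the "explosion" is just compounding.
print(levels[-1])
```

Notice that nothing dramatic happens in any single cycle; the runaway behavior comes entirely from feeding each improvement back into the next one.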

At this point making predictions becomes impossible. Any controls we try to put in place to prevent this from happening will likely fail because the AI will be smarter than the designers of those controls. If we are very lucky, ASIs might be benevolent, and solve all of our problems for us, taking good care of us in the way we might take care of a pet. (Or at least they might try to leave us alone, like the Crystal Gems do.) Or a newborn ASI might be more like home-world gems: completely indifferent to the existence of us insignificant humans as it begins working on reconfiguring the entire universe (including the atoms our bodies are made of) into a massive computer. There’s really just no way to predict what something that’s smarter than every human who has ever lived put together is capable of, because we’re limited by our own puny human imaginations.

The birth of self-improving ASI is sometimes referred to as an “intelligence explosion,” and it is one possible outcome of another idea called the “technological singularity.” Some people think the outcome of the singularity will instead be a merging of organic and artificial intelligence; that we will recursively self-improve our own intelligence with genetic modification and synthetic enhancements. (For the Mass Effect players reading this, think Synthesis.) Huh. A merging of artificial and natural intelligence. Have we seen anything like that on “Steven Universe”?

Oh, right, the title character. So it seems that in the world of Steven Universe, the outcome of the singularity is… Steven Universe.

But how likely is it that synthesis is the outcome of the singularity? Not very, according to Nick Bostrom. (Sorry I keep mentioning him, but I’m reading his book right now.) His reasoning is that computational power is currently increasing much faster than our ability to increase our own intelligence. Completely artificial brains are also much faster than organic brains and don’t have the same physical limitations on size and energy consumption. Essentially, it’s a race to the singularity between natural intelligence and AIs. Organics have a big head start and are still in the lead, but AI is the faster runner and is probably going to pass us before the race is over.
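The race analogy can be caricatured with a toy simulation. Every number here (the head start, the growth rates) is an assumption I made up purely to show the shape of the argument: a slow-growing curve with a lead still gets overtaken by a fast-growing one:

```python
# Illustrative "race to the singularity." All starting values and
# growth rates are made-up assumptions, not data.
# Organic intelligence starts far ahead but improves slowly;
# machine intelligence starts behind but compounds much faster.

def crossover_step(organic=100.0, machine=1.0,
                   organic_rate=0.005, machine_rate=0.40):
    """Return how many steps until the faster-growing machine
    curve overtakes the organic curve."""
    step = 0
    while machine < organic:
        organic *= 1 + organic_rate   # slow biological/cultural gains
        machine *= 1 + machine_rate   # fast compounding hardware gains
        step += 1
    return step


# With these assumed numbers, a 100x head start is erased in 14 steps.
print(crossover_step())
```

The exact crossover point is meaningless (change the assumptions and it moves), but no plausible choice of a modest organic growth rate prevents the crossover from happening eventually, which is the heart of Bostrom’s point.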

So, I guess the conclusion I’ve now reached is that “Steven Universe” is probably a depiction of extraterrestrial artificial intelligence, but it’s also probably not a realistic one. Am I bothered by that? Heck no. That’s the point of fiction: to ask “what if?” and show us alternative universes. My point here isn’t to explain how Steven Universe is wrong about AI, but rather to celebrate that an animated children’s show is dealing with such high-level concepts. (At least I think it might be.)