0. Setting the Stage

When developing content for this blog, I have one simple goal: To create high quality, original content. This particular story emphasizes the “original” part of that goal.

If you want an in-depth, expert analysis on animal ethics, you should be reading Peter Singer. If you want to understand the complexities of superintelligence, you should look to Nick Bostrom. However, as far as I’m aware, I’m the first person to examine the future of superintelligence through an animal ethics lens.

Before writing this story, I had already done quite a bit of research on animal ethics, so I felt qualified to discuss that area, but I had minimal knowledge of superintelligence (SI) and artificial intelligence (AI). When I started reading about SIs, I quickly realized that, because SIs don’t exist yet, no one actually knows what they might be like.

That’s when I realized how valuable an animal ethics lens could be to this topic.

Superintelligence is relative. To humans, there are no SIs yet, but to ants, the Earth is teeming with them.

The goal of this story is to discuss how humans act as relative superintelligences when compared to other beings, and to use this knowledge to predict how a full-on superintelligence may behave toward humans. And along the way, we might learn something about how humans should be treating animals.

Table of Contents

I. Statement of bias. Who your author is and why you shouldn’t trust him.
II. What we consider ethically when deciding how to treat animals. A quick background to animal ethics — the chicken didn’t cross the road because it is dead.
III. What is superintelligence? A quick introduction to superintelligence — when in doubt, ask Wikipedia.
IV. How humans treat lower intelligences. Domestication, symbiosis, and bacon.
V. With great intelligence comes great morals. Moral frameworks for SIs, and how to respond when an SI says “you’re not the boss of me.”
VI. Superintelligent-ish things that already exist. Partially superintelligent beings in your daily life — no, you aren’t one of them.
VII. Conclusions. In which I break the number one rule of writing.

I. Statement of Bias

I’ve been vegetarian for a few years now. In fact, for ethical reasons I believe most people — health and economic status permitting — should be vegetarian. Put simply, we don’t need to hurt animals to live happily, so we shouldn’t hurt animals.

Trust me, I’ve heard the arguments. “Plants feel pain too.” “God put animals here for us to use.” “Where will you get your protein?” And my personal favorite: “Lions do it, so we should too.”

However, a discussion of those arguments will have to wait for another time. I only wanted to disclose my beliefs because this is a controversial topic and I want to be open about my perspective. Now I’ll get back to talking about animals and superintelligence. Let’s start with animals.

II. What do we consider ethically when deciding how to treat animals?

The following isn’t a comprehensive list of the factors that go into the ethical treatment of animals, but these are some of the most commonly cited. Ethicists could argue endlessly over each of these topics, so I will only give a short introduction to each.

Their usefulness to society? Imagine two horses. Andrew is tame and domesticated, and Jacob is untrained and wild. Andrew is friendly to people, enjoys giving rides, loves getting pets from little Susie, and is an invaluable asset to farmer Fred’s income. Jacob, despite never having met Susie, has an equally great life in which he enjoys running through golden fields with his horse friends, and getting tipsy off fermented berries.

Source: Image of Jacob and a friend totally wasted out of their minds on fermented berries.

If you had to pick a horse to eat for dinner, which one would get the axe?

You’d likely pick Jacob, the wild one. You can’t bear to see Susie’s crying face. But is that the moral choice? Should an animal’s life be valued less just because it isn’t useful to humans?

Their ability to suffer?

“The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?… The time will come when humanity will extend its mantle over everything which breathes… “ -Jeremy Bentham

Bentham, an English philosopher writing around the turn of the 19th century, argued that the intelligence and usefulness of a being don’t matter. Only its ability to “suffer” does.

But even if we completely agree with Bentham, “suffering” is complicated. Not all animals feel pain the same way. Fish, for example, have different anatomical structures for registering pain than mammals do. So different, in fact, that some people don’t believe fish feel any physical pain at all. If they’re right, does that make physically harming fish morally acceptable?

Also, physical pain isn’t all there is to suffering. Mental suffering, such as fear and anxiety, is also powerful. A cow that has seen her friends slaughtered in front of her may be able to understand what is about to happen to her. She will panic, resulting in psychological distress and suffering. Should an animal that is intelligent enough to predict its own demise be given more moral consideration than one that is unable to do so? This brings us to the next topic.

Their intelligence? Ah, finally, something that might relate to superintelligence.

Source: a “dolphin safe” label on a can of tuna

Many humans scale their concern for a particular animal’s well-being to the animal’s intelligence. For example, because dolphins are well known for their intelligence, they are loved by the general public and have been the subject of powerful activism and subsequent legislation to prevent dolphin suffering. The premise is that an animal of higher intelligence is “human-like” and deserves to be treated “humanely”. Many people don’t even blink when they kill a chicken or pig because they consider (perhaps erroneously) those beings to be unintelligent. Our justifications usually go something like “it is an inferior being, its life is unimportant, and it won’t even know what is happening to it.”

But is it okay to hurt something just because it is stupid? Imagine extending this logic to humans. Should we value the lives of the mentally ill, senile, and infants less than the lives of those in their primes?

You may have noticed that my discussions of ethics generally contain more questions than answers.

III. What is superintelligence?

Basically, just anything smarter than humans. Okay, wait, I can do better. Here’s what Wikipedia says…

“A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds… University of Oxford philosopher Nick Bostrom defines superintelligence as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’”

A superintel could beat the greatest chess master in chess, design a better iPhone than Steve Jobs, and even win a land war in Asia. It’s basically an all-around superhuman in terms of intelligence. So now that we know what it is, what might it look like? Most predictions say it would be some type of abiotic artificial intelligence. You can see my predictions below.

As useful as R2-D2 would be around the house for all those times I have to fend off super battle droids, the design simply isn’t practical for a first-wave superintelligence. A more likely candidate would look something like “Blue Waters”, a supercomputer housed in a gigantic room at the University of Illinois. It does math all day long, making quadrillions of calculations each second. Blue Waters can run computer models and software far beyond the capability of standard computers, and certainly beyond the capability of any human. As smart as Blue Waters is, it is not fully superintelligent. It can do math incredibly well, but it cannot identify a rock as a rock. At least not as well as a human can. Additionally, it is unable to correctly answer the question “where do you want to go to eat?”, although to be fair, neither can most humans.

Decades after the first Blue Waters-like SIs are born, society may adopt SIs as the new standard for desktop computers. They could help us by planning our daily schedules for optimal efficiency, playing the right music for the current household mood, and answering emails or phone calls for us. However, as fun as this sounds, the apocalyptic movies about superintelligence do have a point. Because SIs are, by definition, more intelligent and capable than humans, they may be able to use us to their advantage. Historically, beings of higher intelligence have not exactly lived harmoniously with beings of lesser intelligence. To see this, we need look no further than human-animal interactions.

IV. How have humans treated lower intelligences?

Not all non-human species are treated equally. We domesticate some, form beneficial relationships with others, and abuse others. Because humans are relative SIs when compared to animals, we can look to current human-animal interactions to predict how SIs might treat humans.

Domestication. Dogs and cats get along great with humans. So great, in fact, that humans often decide they can gain more pleasure from caring for dogs and cats than they could from eating their meat.

This situation works out pretty well for the dogs and cats, who usually gain free food, shelter, and friends. Usually, the animals that make the best pets are cute, intelligent, and not dangerous. If humans can fill these roles for a superintelligence, we could live happy lives as domesticated pets of SIs. We might have to sacrifice some degree of freedom or power to make this relationship work, but the gains in safety and security would likely make the trade worth it.

Symbiotic relationships. Humans live in some form of symbiosis with many different species, but one particularly interesting example is the relationship between humans and their gut bacteria. Humans host a large, diverse bacterial colony in the gut, which we can either help or harm depending on our dietary intake. If the gut bacteria are kept healthy, they in turn keep the human healthier. This relationship is symbiotic.

Notice, however, that we don’t consider our gut bacteria in moral decision-making. We usually ignore our gut bacteria entirely, and it just so happens that our goals in life don’t conflict.

Extending this analogy to superintelligence, if we can prove ourselves useful to SIs, they might take care of us. In fact, as we will likely be the builders of the first SIs, we might prove invaluable for maintaining and upgrading them when necessary. Aiming for some form of symbiotic dependence is likely the best hope humans have for coexisting happily with SI.

Abuse and slaughter. Humans kill around 50 billion animals each year. We use them for foods such as milk and meat, but also for products such as leather and dyes. Because treating these animals kindly is expensive, we often lock them up on hard floors, give them little access to fresh fields, and pack them closely together in feedlots. In America, at least, there are very few regulations promoting ethical farm animal care, and factories essentially do as they please. Ducks, cows, pigs, and chickens are among the animals most often subjected to harsh treatment.

How does this relate to superintelligence? It acts as a warning. If we create an SI that discovers it can benefit from human suffering, it may not hesitate to inflict that suffering on humans if given the power to do so.

Source: Packed livestock feedlots

V. With super-intelligence comes super-morals. Maybe?

It stands to reason that an agent with more information can more easily make the best moral choice. A being with more information and intelligence is better able to quantify the benefits of various outcomes, has a better understanding of precedent decisions, and has greater imagination and creativity. Additionally, many higher-level intelligences (such as well-educated people) control large quantities of resources, allowing them to benefit from economies of scale and to deploy those resources where their philanthropy does the most good.

At least in theory.

To find out how intelligence and access to resources actually influence how resource-rich agents treat inferior intelligences, we can look at data on a simple question: do humans from areas with more resources and education behave more kindly towards animals? Unfortunately, not really.

Source: Graph showing a positive correlation between income and meat consumption.

Judging by the raw correlation between national income and meat consumption, it appears that high-income, high-education humans actually inflict more death upon animals. Because of this, I cannot confidently say that “with superintelligence comes super morals.” But it might be possible to solve this problem. What if we programmed morals into SI?

Programming morals into superintelligence. Assuming that humans are the ones creating SIs, we could program them to hold whatever moral framework we want. One possible method is to simply transfer the moral framework of a person into a program. Ideally, we would choose a particularly kind and compassionate human for this.

Of course, all humans are imperfect, and so this SI would be imperfect too, but at least humans could predict its behavior somewhat because its ethics would be familiar. One downside to this choice is that even the most compassionate humans (with the exception of Jains) don’t flinch when they step on a bug or swallow a flea. If an SI with human morals were ever to reach such a point as to consider humans to be “flea-like,” humans would be doomed.

Another option for programming morality into superintels is to allow the SI to develop its own morals. Considering an SI is more intelligent than any human, it could understand more about Kant and Bentham than any human ever will. Using its intelligence, an SI could use complex philosophical ideas to construct what it considers to be the optimal moral framework.

However, if we give an SI this much freedom, we have to accept that humans won’t always understand the choices it makes. Currently, the vast majority of humans are not ready to accept life-and-death decisions made by a computer. Would you understand if a doctor told you that a computer in the other room had proclaimed your child’s cancer treatable, but not worth the resources necessary for treatment? This is an entirely plausible scenario: a utilitarian SI may decide that the tens of thousands of dollars it costs to treat your child would be better spent saving multiple starving children elsewhere in the world. We as a society are not prepared to trust non-human intelligences with these decisions.

A third possibility for creating an SI’s moral framework is allowing it to arise through evolution. While biological evolution via natural selection generally plays out over millennia, computer scientists have evolution-like machine learning techniques, such as neuroevolution and genetic programming, that run at vastly expedited rates. Scientists could also emphasize the survival of “genes” (code) for compassion or empathy, increasing the likelihood that the resulting SI will be compassionate and empathetic. For example, scientists could create a virtual neuroevolution “room” in which different instances of AI evolve over millions of “generations” (permutations) until a compassionate and empathetic SI emerges.

However, the room’s parameters could be set to encourage the survival of whatever genes the programmers desired. A room could be constructed to make an SI that would find the best way to win a thermonuclear war. Or tic-tac-toe. When we design the parameters for this neuroevolution and hit “run program,” we must realize that whatever form of SI we create will surely have one goal: to be better than other beings at all costs. If an AI is not superior to all other beings in the virtual neuroevolution room, it will not survive, and its code will not be passed on. That survival instinct becomes part of its “character.”
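To make the selection pressure concrete, here is a toy sketch of the “room” idea in Python. It is not based on any real neuroevolution library; the two-number genome, the “compassion” trait, and the fitness weights are all invented for illustration. The point is simply that traits survive only to the degree the room’s parameters reward them.

```python
import random

random.seed(0)

# Toy "neuroevolution room": each genome is a pair of traits in [0, 1]:
#   competitiveness: how well the agent survives selection
#   compassion:      the trait the room's designers may or may not reward
# Both trait names are illustrative inventions for this sketch.

POP_SIZE = 50
GENERATIONS = 200
MUTATION = 0.05  # std. dev. of Gaussian mutation noise

def fitness(genome, compassion_weight):
    """Survival score: competitiveness always counts; the room's
    parameters decide how much compassion counts."""
    competitiveness, compassion = genome
    return competitiveness + compassion_weight * compassion

def evolve(compassion_weight):
    """Run truncation selection for GENERATIONS rounds and return the
    population's mean (competitiveness, compassion)."""
    pop = [(random.random(), random.random()) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: only the fitter half survives to reproduce.
        pop.sort(key=lambda g: fitness(g, compassion_weight), reverse=True)
        survivors = pop[:POP_SIZE // 2]
        # Reproduction: each survivor spawns one mutated child.
        children = [
            tuple(min(1.0, max(0.0, t + random.gauss(0, MUTATION))) for t in p)
            for p in survivors
        ]
        pop = survivors + children
    avg = lambda i: sum(g[i] for g in pop) / len(pop)
    return avg(0), avg(1)

# Two rooms: one that ignores compassion, one that rewards it.
comp_a, compassion_a = evolve(compassion_weight=0.0)
comp_b, compassion_b = evolve(compassion_weight=1.0)
```

In both rooms, the trait under direct selection (competitiveness) is driven toward its maximum, while compassion ends up wherever the designer-chosen weight pushes it. Whatever else we reward, the pressure to outcompete the rest of the room never goes away.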

Why does this matter? Because a superintelligence created through evolution is not likely to sacrifice itself or its well-being for the good of a human. In fact, if an SI believes it could more solidly ensure its survival by harming a human, it may decide to do so. This is a major concern regarding SIs made through evolution, and it raises a solid argument for why we shouldn’t develop SIs — or at least the moral frameworks for SIs — through evolution. Rather, we could transfer fully created and customized moral frameworks onto otherwise fully-evolved SIs through direct programming.

For now, the philosophical question of exactly which moral framework should be installed in SIs remains unsolved. Truthfully, a perfect answer will probably never exist. Humans have been debating morality for millennia with no end in sight. We’ll just have to do our best.

VI. Superintelligent-ish things we already have

This section doesn’t cover animals directly, but it is best understood in the context of the previous discussions. Also, agents discussed here aren’t full-on SIs. Well, except for God. Instead, these agents are all “partial” SIs. They are superior to humans in some ways, but equal or inferior in others.

Corporations as superintelligence. In calculating 4+4 or throwing a baseball, the entirety of Apple Inc. is no more effective than I am. However, when it comes to designing a complex tech product, I couldn’t come up with a better design if I were given 20 lifetimes to experiment. This gives organized corporations such as Apple an advantage over the average human, and they can use it to their benefit in corporation-human interactions.

One well-known case of corporation-human interaction involves Nestlé. Infamously, in the late 1970s, Nestlé used its power to profit off of third-world countries by aggressively marketing its infant formula to mothers (story). Similar exploitation can be found in the diamond market with De Beers’ past selling of “blood diamonds” (story). Numerous other examples of corporations profiting off the abuse of humans exist, and unfortunately the pattern is prevalent enough to call a trend.

Why do corporations harm humans? The answer can be traced back to capitalism. Capitalism is essentially natural selection for businesses: if a business cannot remain competitive with the rest of the market, it dies. Because of this, businesses are motivated to profit at all costs, and they often value profit above human well-being. Neuroevolution, natural selection, and capitalism all share the same “survive at all costs” element, and this shapes what types of entities “win” in each arena.

Of course, not all organized groups of people are subject to capitalism, so you may wonder what happens when the “profit at all costs” motive is removed. For this, we turn to governments.

Government as superintelligence. Governments exist outside the ever-turning cog of capitalism, and are therefore not profit driven. Well. Sort of.

A good government sets up an implicit trade agreement with its people, saying, “I will take care of setting the rules and protecting your interests, and in return you the people will grant me power, trust, and taxes.”

Civilians generally enter this agreement because they believe the government is made up of people who are qualified and experienced at setting good rules. The resulting governing entity can govern much larger groups of people far more efficiently than any single person could, making it partially superintelligent.

Unfortunately, the government-citizen trust doesn’t always work out perfectly. In fact, historically, for the vast majority of attempts, it hasn’t. Governments eventually decide they want more out of the government-citizen trust, and they demand more power. Because they can only keep that power with the public’s acquiescence, they may resort to jailing those who don’t trust them, rewarding their supporters, and spewing propaganda. A similar outcome may occur with SIs: they may behave honestly and openly at first, but eventually decide to pursue more power at the price of human suffering. Just because governments don’t need to profit to survive doesn’t mean they always behave ethically.

Earth as superintelligence. Hear me out. Earth and its ecosystems can’t solve 4+4. Or at least, they don’t care to. This is because our Earth doesn’t have a brain — or software — that we identify as such.

However, Earth does host extremely complex biological processes beyond human understanding. Brain or not, superintelligent or not, humans live at the mercy of ecosystems. We exist with the Earth in symbiosis — we depend on resources such as wood, oil, and water, and (ideally) consume them only at a rate that doesn’t deplete them.

Consider climate change. If humans continue abusing the planet’s atmosphere, the planet will warm, with catastrophic consequences for humankind. The Earth isn’t bothered by human existence, and it can tolerate us in modest amounts, but if we start to cause trouble for its ecosystems, it will quickly dispose of us, and there’s nothing we can do about it except try to jump off to another habitable planet.

I promised originality. I said nothing about quality.

You may notice that this “symbiotic until proven harmful” relationship is analogous to how humans treat gut microbes. Humans can handle some bacteria in the body, and most of the time humans don’t even think about their gut microbes, but if the microbes begin to cause harm and sickness, the body will heat up and create an environment in which they cannot live. The microbes’ best hope for survival is then transferring to another host and hoping it is hospitable.

God as superintelligence. If God exists, He is certainly superintelligent. He is basically an invisible being that has a plan for every possible outcome. Maybe He has a mathematical model for every possible outcome in the entire universe. Like Blue Waters on steroids. Maybe.

Okay, this story is getting out of control. Let’s bring it back to where we started.

VII. Conclusions

Yes, I know. I have two conclusions. I’m breaking all the rules of writing that I learned in high school. Sue me. I promised you I would discuss two things: Humans as relative SI when compared to animals, and the animal ethics implications of that discussion. I’ll start with the animal ethics part.

What does all this mean for how we treat animals? An ethical framework predicated on intelligence allows us to enjoy bacon and eggs, but it doesn’t work out all that well for us in the age of superintelligence. In designing our personal framework, it is easy to consider beings of lesser intelligence such as animals “inferior”, “other”, or “unimportant”. Such classifications make it easier to justify killing them for pleasure. However, now that there is a real chance humans could become the inferior intelligence, we may find ourselves incentivized to reform our ethical frameworks to extend empathy to beings regardless of intelligence. It would be hypocritical of humans to treat animals so despicably, then naïvely expect SIs not to do the same to us.

The fact is, the days of human supremacy are coming to a close. It is in our best interest to create a world where all beings are treated on a sweeping platform of egalitarianism.

“Do not do unto others what they would have you not do unto them”

It’s the golden rule for the age of superintelligence. We would not want to be farmed, penned, and slaughtered by a superior intelligence. Therefore, while we are still a superior intelligence, we should not farm, pen, or slaughter. It’s only fair.

Credit: Pigs are just so darn cute.

What does all this mean for how we should make superintels? We shouldn’t make SIs. At least not yet.

Partially superintelligent technologies are acceptable, and they are an essential part of how society functions and progresses. So far, partial SIs made of organized people (governments and businesses) have avoided destroying the human race. And partial SIs in the form of technology (Google Maps and Facebook) still have a long way to go before posing any imminent threat. Even Blue Waters, with its quadrillions of calculations per second, is securely under the control of computer scientists and engineers.

As long as these partial SIs stay partial, humans will remain supreme. The problem will arise if these partial SIs come together to form a being that is superintelligent in every faculty: a piece of software that can out-maneuver a human pilot in a UAV, drive a car faster than Dale Earnhardt Jr., and design an iPhone better than Steve Jobs could.

When these elements come together, humans will be outwitted and outgunned. If the moral frameworks of SIs deem humans unworthy of moral consideration, and SIs decide they can gain from human suffering, our fun is over.

Everything comes down to designing the moral frameworks of these intelligences. Until we know for sure that SIs won’t suffer the same ethical shortcomings as humanity, we should avoid creating them. For now, the only way to win is not to play.

— Ben Chapman

Thank you to Bliss Chapman for recommending this topic. No superintelligences were harmed in the writing of this story.
