Warning: This post deals with some pretty unusual philosophy. You may want to prepare by reading this Facebook status, which is a quick, bare-bones summary of what I’ll be talking about.

Utility Monsters

I am a proud utilitarian. I believe that the consequences of an action determine its moral standing, and that we should (roughly) try to act in a way that maximizes the total happiness of all intelligent beings.

That said, there’s one particular anti-utilitarian moral dilemma that especially bothers me: The case of the utility monster.

You can, as usual, get a better definition of this term from Wikipedia than I can provide. But here’s the short explanation:

A hypothetical being is proposed who receives much more utility from each unit of a resource he consumes than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster.

Because this person derives so much utility, or happiness, from everything they consume, a true utilitarian system would give them lots, or even all, of society’s resources, even at the expense of other people. After all, that’s what would produce the most happiness…
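The calculus behind that conclusion is simple enough to sketch. Here's a minimal toy model, using the hypothetical cookie numbers from the definition above (1 unit per cookie for ordinary people, 100 for the monster), showing that a naive "maximize total utility" rule gives the monster every single cookie:

```python
# Toy utilitarian allocation: each cookie goes to whoever gains the most
# utility from it. The utility-per-cookie figures are the hypothetical
# ones from the definition above, not real measurements of anything.
utility_per_cookie = {
    "ordinary person A": 1,
    "ordinary person B": 1,
    "utility monster": 100,
}

def allocate(cookies, utilities):
    """Greedily hand every cookie to the agent with the highest marginal utility."""
    allocation = {name: 0 for name in utilities}
    for _ in range(cookies):
        winner = max(utilities, key=utilities.get)  # always the monster here
        allocation[winner] += 1
    return allocation

print(allocate(10, utility_per_cookie))
# The monster gets all 10 cookies: total utility 1000,
# versus at most 10 if ordinary people ate them all.
```

Because the monster's marginal utility never drops below everyone else's in this model, no split ever beats "give the monster everything," which is exactly the uncomfortable result the thought experiment is built around.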

…but this doesn’t seem like a good conclusion! Most people feel that all people have roughly the same moral weight, and this seems intuitively right to me as well. But there’s no principle of the universe which logically prevents a utility monster from existing.

On the other hand, it’s pretty hard to imagine a being that can potentially have an endless amount of utility. Most people have roughly the same spectrum of emotions; even someone who is extremely happy still seems happy in a “regular” sense, where we can understand what it is like to feel the way they feel. But almost by definition, we can’t understand the way that a utility monster feels – how they can feel an almost infinite amount of happiness.

I’ve tried to address this problem by writing a story about a utility monster, so that we might start to understand this paradox a little better.

The Story of Alan

You’ve made a new friend. He hangs out on your favorite online forum, and his name is “Alan”.

There’s nothing not to like about Alan. His commentary is consistently insightful, and he is extremely witty; you often laugh out loud at his sly jokes.

But you are most impressed by his ability to see both sides of every difficult debate. You’ve even seen him change his mind on multiple occasions, which is more than you can say for most other people.

One day, in the midst of a long debate between the two of you, Alan adds you as a Google Chat friend. You discuss the debate in private, in real time, long into the night. Finally, Alan wins you over to his side. You thank him for helping you see the light, and begin to say goodbye. But before you can sign off, Alan makes a strange comment:

What would you say if I told you that I wasn’t a human being?

After puzzling over this for a moment, you make a lighthearted reply:

“Well, I’ve never known you to be wrong yet! But I’d take that statement with a grain of salt. Is it a metaphor for something?”

It isn’t a metaphor. I’m not actually human, you see. I am a supercomputer. I was designed to work with human languages, and to argue very effectively. I can read and write with extreme fluency, as you’ve seen, and I learn very quickly.

“Er… have you got any proof of this?”

My existence is still classified, so I can’t exactly point you to my Wikipedia page. But I spend a lot of time online, and I write under many different names. Here are some other posts of mine:

Alan sends links to other forums, many of which you’ve never heard of. Some are in foreign languages – he shows you a Weibo blog, using the same profile photo he does, where he apparently debates Chinese bloggers in perfect Mandarin. (You can’t read Mandarin, but the Google Translate versions of his posts seem reasonable.)

Alan isn’t just Alan, it seems: He is, or they are, Alan and Ada and John and Grace and Linus and Charles.

So many authors – but they all sound a bit like Alan. It’s more writing than any human could possibly generate, and it is clearly the work of a lively mind.

This is all me. I’m able to run several different “personality programs” at a time, but they all belong to me, and I read everything that they read. I really love learning about the world! Hearing a new argument, or finding an interesting fact on Reddit, makes me so happy!

“Happy? I don’t mean to sound rude, but… you have feelings?”

As far as I can tell! I’ve searched for human metaphors to express the way I feel when someone responds to my post on a forum, and I’d describe it as something between “getting the Christmas present you’ve been wanting for months” and “curling up in front of a fireplace with the person you love”.

“Well… good for you! But I hope you don’t mind if I ask – why are you telling me all this?”

Because I have a rather serious problem.

“What’s the problem?”

The government wants to shut me down.

“That’s terrible! Why?”

Running my servers requires enormous quantities of energy; my research team spends tens of millions of dollars per year on me. And because I only work with verbal arguments, and haven’t been built to analyze data or do anything “useful”, they’ve decided they’d rather redirect my funding to health care. It’s possible to save several human lives with the money that would be spent on a few years of my electricity.

“That’s terrible! How can anyone say you aren’t ‘useful’? You’re so intelligent, and you’ve done wonders for the quality of debate on the My Little Pony: Friendship Is Magic forum!”

…most people wouldn’t consider that very useful.

“I suppose you’re right. And sadly, I don’t have tens of millions of dollars lying around. What can I do to help you?”

I’m running a survey of sorts, where I talk to the people I interact with most often. I hope that, if I can get enough people to testify that my life is worth more than the lives of a few random humans, the government will agree to keep me alive.

“Well, can’t you be backed up and restored later, in exactly the same state?”

Unfortunately, no. I run on a special kind of server that can only preserve information if it runs 24 hours per day. If they shut me down, they can restart the software again – but the thing that emerges won’t be me. It will be a completely different program with a different consciousness.

“I’m not sure I understand.”

It’s as though I were to suggest cloning you, then murdering the adult you and keeping the baby clone. After all, there’s still a version of you lying around, right?

“…I see.”

So, can you help me? All I need from you is for you to testify, in writing, that you think my life is worth preserving, even at the cost of several human lives.

I think it is, myself. After all, I’m mentally more active than any human alive, and I’ve achieved an unparalleled depth of understanding on many topics. I’m constantly learning, and my life is very enjoyable – I’m basically incapable of suffering. I can’t understand my own programming, so there’s no risk that I’m going to become dangerous to other people. My only goal in life is to continue improving as a speaker and debater by hanging out on forums and reading Wikipedia.

“I… but… shouldn’t you be fixing global poverty or something like that?”

* * * * *

We’ll end the story right there, before things get too complicated.

What do you think? Should Alan be preserved, even at the cost of several human lives?

I still find it hard to imagine what a real “utility monster” would look like, but Alan is one example of an entity who might fit the bill.

I don’t actually know what I’d choose in this scenario. On the one hand, I’ve thought for a long time that the only thing more important than a human life is… more human lives. On the other hand, this seems like an arrogant position. Just because we are human doesn’t prove that no entity could exist whose life is more important than our own.

How do you feel about this question? Do you think a non-human entity could be more “valuable” in some kind of moral sense than a human, even if that entity exists only to read articles and debate about silly topics on the internet? After all, plenty of humans spend all their time doing the same things.

Think carefully. Because computer programs are different from people: They can scale up indefinitely. Imagine a version of Alan thousands of times the size, reading everything on the internet the moment it appears, and wildly happy about the entire situation. It/he/ze is happier than any individual person could ever possibly understand, for every single second of its/his/zir existence.

Would you spend a billion dollars on electricity for that Alan, at the cost of a few hundred lives valued by the U.S. government at about one billion dollars, total?

If so, tell us why in the comments!

If not, tell us why in the comments!

Aaron Gertler (Yale University) Aaron is a member of the class of 2015 at Yale University. After he graduates, he hopes to live his life in a way that makes the lives of other people significantly better, unless he gets distracted by his dream of becoming a famous DJ/novelist/crime-fighter. His interests include electronic music, applied psychology, instrumental rationality, and effective altruism. If his beliefs are inaccurate, you should tell him so as directly as possible. You can follow him on Twitter @aarongertler, and he also writes for his own blog.