As you all know, I think that intelligent robots will eventually take over all human work. The standard take on this—which I repeat in my recent article—is that even if this produces mass unemployment in the medium term, it will be great in the long term. No more work! We can all live in comfort, pondering philosophy and engaging in uplifting conversation. We will paint and read and admire nature. We will explore the planets and send generation ships to distant stars. It will be a golden age for humanity.

Maybe, but it so happens that I don’t believe this. So just in case you’re not depressed enough by all things Trump, here are a few scenarios I actually consider more likely. Trigger warning: I’m not joking! I don’t have any special knowledge, of course, but I really believe that some of these things are pretty plausible. Conversely, I don’t believe the golden age stuff for a second. Without the pressure of needing to survive, the vast majority of humanity has very little ambition. We’re a lot more likely to watch dumb TV and play video games than we are to read Plato or study cures for cancer. In fact, it’s way worse than that.

Here are a few possibilities. Note that for the purposes of this thought experiment, I’m assuming that we succeed in building strong AI that’s better and smarter than the smartest human being. That may or may not happen, but those are the rules of the game:

We will all be illiterate. If robots are smarter than any human being, why bother sending our kids to school? Over time, I suspect this custom will fade out as it becomes clear that becoming educated doesn’t do any good. No matter how much you know, you’ll never know even a fraction as much as the most bog-ordinary robot.¹

We will lose interest in other people. One of the reeds that robot skeptics hang onto is the fabled human monopoly on empathy and social skills: robots may do all the brainiac work, but they’ll never be able to comfort a child or provide a friendly face in a nursing home. I think this is nuts. Intelligent robots will be the greatest companions ever: infinitely patient, full of interesting gossip, and willing to do anything you want to do. Eventually we will mostly lose interest in having human companions at all. They’re just too much work.

The end of sex. As a corollary to the above, robots will be better sex partners than humans, so reproductive sex will come to an end. For a while we’ll continue to create human babies artificially, but eventually we’ll stop bothering. The human race will die out about a hundred years later.

Eternal life for the few. On the bright side, intelligent AI will likely cure cancer, develop infinite sources of green energy, and turn back climate change. But what if it also figures out how to extend human life indefinitely? This is obviously not feasible for everyone, which means that one way or another we’ll end up with a smallish cadre of the long-lived elect lording it over the rest of us. I don’t know how this would play out, but it seems bad.

Endless war. One of the things human beings love to do is fight each other, and robots will make great fighters. It’s pretty easy to see how this could quickly get out of hand, with massive robot armies engaged in endless, brutal wars that never stop because robots can always build more robots to replace the ones that are destroyed.²

Humans give up. This is actually the scenario I consider most likely. After a while, humans will finally be forced to accept that, yes, robots are so much smarter and more knowledgeable that we’ll never even come close to catching up with them. That literally leaves us with no purpose. Over time, we’ll get listless and depressed, stop having children, and eventually just die out of our own accord. This will take a little while, but probably only two or three hundred years.

This might explain why we’ve never seen signs of life elsewhere in the universe. For biological life, the window of time between the invention of advanced technology (i.e., things that can be detected across long distances, like radio signals) and the end of the race is only a few centuries. Every few million years there’s a very brief spark of intelligent biological life, and then it winks out.³ The odds of two of them happening at the same time are slim.

There are loads of other possibilities, of course. You can play too! Note that I haven’t bothered including the truly apocalyptic scenarios where robots expand infinitely, impassively harvesting the entire earth for material to build more computing power. Nor the possibility that we’ll all dive into virtual realities and live out our lives forever as digital simulacra maintained by the robots. I mean, come on. That stuff is pretty far out there, amirite?

¹If you insist on a bit of optimism, it’s also possible that robots will design brain implants that provide humans with, essentially, an instant education in everything. Humans still won’t be as smart as robots, but we won’t be illiterate. In fact, we’d be the most literate people ever in history.

²Why will we fight wars in an era of endless plenty? Beats me. But one thing humans will probably always be better at than robots is figuring out some reason to fight wars. Religion will do nicely. Or blind nationalism. Or just good old personal feuds. And keep in mind that even if basic resources are endless, there are still things like original Rembrandts and houses on the coast that will remain scarce. I don’t think we’ll have any problem continuing to figure out things to fight about.

³But what about robot intelligence? Won’t it stick around? Sure, but who knows what they’ll do with no biologicals around to give them orders. Maybe they just keep milling around until their sun goes nova. Maybe they all switch off. Beats me. But I consider the infinite expansion hypothesis unlikely. Why? Because it hasn’t happened yet. Unless we’re the very first intelligence ever in the galaxy, a digital intelligence that expanded forever looking for raw material would have eaten up the Milky Way long ago.