Richard Brautigan’s 1967 poem “All Watched Over by Machines of Loving Grace” has the hopeful, reverential tone of a prayer. What Brautigan longs for in the poem is a utopia without work, where humans are “joined back to nature, / returned to our mammal / brothers and sisters” through “cybernetic ecology”—humans being fundamentally too flawed to be trusted with their own paradise. Unfettered by personality, machines would be rulers without greed, fear, hate, or love, going about the drudgery of ministering to human clients free of the disastrous trappings of the ego. It’s a political dream that Brautigan imbues with religious overtones. Not only would machines free us from toiling on the Earth; humans themselves would be transformed and returned to some Rousseauvian state of idyllic primitive bliss.

That’s a lot of faith to put in machines and a lot of blame to place on politicians. Regardless, the utopian vision of a cybernetic savior persists.

Brautigan was a middling writer of 1960s America, not a mold-breaking prophet. Long before Brautigan, the Greeks were trying to purge the insatiable animal appetites from politicians. Classicist Werner Jaeger writes: “A new idea of spiritual freedom now arose, to correspond to that development of ‘self-control’ as the rule of reason over desires. He who possessed it was the opposite of a man who was the slave of his own lusts.” In a world where there were literal slaves, this new idea held that a slave who was in control of his passions was actually liberated, and an aristocrat who lacked self-control was actually a slave. There arose a need to liberate aristocrats from their base desires, and this education of the full character became known as paideia.

The Greeks weren’t utopians. They knew that you couldn’t completely eradicate human weakness. But the attempt to transcend oneself, to improve one’s character, is a natural aspiration. One can look to modern elected American officials—pick almost any name—lament their lack of self-knowledge, anemic rhetoric, and paucity of wisdom, and wonder what they might have been had they been exposed to paideia.

Politicians aren’t popular in America. The most recent Gallup survey rated members of Congress as the least trustworthy profession, just below car salespeople. Another poll places their popularity below that of cockroaches and traffic jams.

A hatred and distrust of politicians is something that most Americans agree on regardless of ideology. Is it the politician’s very humanity that we distrust? Would a cybernetic ruler, a machine of loving grace, do any better? Would a robot politician, completely shorn of pettiness and self-interest, be perceived as more trustworthy than a human pol?

A recent survey conducted by Chapman University suggests that an artificially intelligent government might present its own problems, quite different from those currently afflicting the body politic:

For the survey, a random sample of around 1,500 adults ranked their fears of 88 different items on a scale of one (not afraid) to four (very afraid). The fears were divided into 10 different categories: crime, personal anxieties (like clowns or public speaking), judgment of others, environment, daily life (like romantic rejection or talking to strangers), technology, natural disasters, personal future, man-made disasters, and government—and when the study authors averaged out the fear scores across all the different categories, technology came in second place, right behind natural disasters.

And so the first problem with having an AI politician presents itself: We may loathe human politicians, but we fear technology more than almost anything else, natural disasters excepted.

Algorithms so completely permeate our day-to-day lives that it can be difficult for people to recognize when and how technology is helping them. Consumer devices like phones and laptops are obvious, but there are less visible things like the network of satellites used for GPS, distribution software used by power companies, and high-end medical equipment. On the other hand, abuses of cutting-edge technology have been prominent in the last decade: National Security Agency data collection, cyber warfare, hacks of financial information. Christopher Bader, a co-author of the fear study and a professor of sociology at Chapman University, recently articulated our fear of technology: “People tend to express the highest level of fear for things they’re dependent on but that they don’t have any control over, and that’s almost a perfect definition of technology. You can no longer make it in society without using technology you don’t understand to buy things at a store, to talk to other people, to conduct business. People are increasingly dependent, but they don’t have any idea how these things actually work.” In other words, people may fear technology, but does that fear even matter?

There’s no mass movement to completely scrap technological innovation. But there is a movement operating at the other end of the spectrum composed of people who embrace even greater hybridity between humans and technology as something not just inevitable, but desirable.

Zoltan Istvan is running for president on a transhumanist platform. Istvan, a futurist writer and blogger for venues including Vice and Psychology Today, explained transhumanism to me as “using science and technology to radically change and upgrade the human being. It also means upgrading the human experience.” That definition might strike some as vague, but it’s necessarily so, since the particulars of transhumanist goals await clear definition in a post-human world.

The modern roots of transhumanism begin with biologist Julian Huxley, who used the term to title an article that described how humanity could use technology to “transcend itself.” The most recent iteration is promoted by futurist and Google employee Ray Kurzweil, champion of the ersatz eschatology of the Singularity, a technological omega point beyond which humans will either be destroyed or become something that transcends humanity as we currently understand it. The goals of transhumanism are, as one might guess, utopian: using technology to eradicate racism, poverty, disease, and malnutrition—but also nations, death, and, you guessed it, politicians.

When I asked Istvan, currently touring America in his Immortality Bus, what American politics would look like in 100 years, he answered that “There won’t be an America in 100 years. I’m sure of that fact. That’s why I formed the World Transhumanist Party, whose mandate is to become the first democratically [elected] political party to run a world government.” When I asked him if he would want to be replaced by an AI politician, Istvan’s response was upbeat and matter-of-fact.

"Yes, absolutely," he said. "I would love to see a truly altruistic entity running our government. Right now, all politicians, including myself, are motivated by self-interest. This is just how humans are. So wouldn’t it be nice to have something like a super-intelligent AI running things and it be entirely after our best interest?"

Istvan might be the only politician in America running on a platform of eradicating not only national boundaries but his own job as well. His honesty is matched by his optimism.

Istvan is an ideal spokesperson for AI politicians. He’s upbeat, articulate, intelligent, and genuine. Reading his writing, it’s easy to become infected by optimism. But so many questions remain, the most important of which might be: How exactly would an AI politician work? Over at the Institute for Ethics & Emerging Technologies, a human enhancement and technoprogressive non-profit, the AI politician discussion mostly hinges on the negative personality traits of “meat-bag” politicians, specifically: vanity, rage/revenge, and sex addiction. Basically, the idea would be that an AI politician would have an ego (“if it has a drive for self-improvement ... it will have an ego”), but would be programmed to turn off negative impulses that would get in the way of implementing policy or following the law. It would be paideia in binary code.

That’s helpful, but it doesn’t really answer the question. Would an AI political system just be some form of high-tech direct democracy? How would the AI know which policies to implement and which laws to uphold? Beyond its inability to be sexually aroused, where would the AI’s political authority come from?

“I think if we set up a direct democracy—or a real-time democracy, as I like to call it—we could, as a people, instruct AI to do our bidding,” Istvan says. “I'm imagining everyone on their cell phones voting in real time for things they wanted—of course, in 20 years it won't be cell phone but cranial implants. But the idea is that this AI would administer where the people would want to take their nation, and their desires for government.”

I put the same question to James Barrat, documentary filmmaker and author of the book Our Final Invention: Artificial Intelligence and the End of the Human Era. Barrat was a bit more skeptical: “In the not-distant future, intelligence will be baked into the fabric of our surroundings, and powerful cognitive architectures will be in charge of our infrastructure of power, energy, finance, water, and transportation. We’ll be at the mercy of intelligent machines. Our survival will depend on being able to co-exist with them. In that sense machines will become our politicians. They’ll be responsible for making the most important decisions about our lives. I don’t think of that reality as an enhanced democracy. It’ll be more like Plato’s utopian city, in which we’ll be ruled by machines instead of philosopher kings. We have an opportunity now, and a responsibility, to make sure those machines are beneficial to humanity.”

Barrat is suggesting that hyperintelligent machines will eventually be beyond our control, and that what we do now will go a long way in determining the “moral system,” if you want to call it that, of our future philosopher-king robots. And again, this poses another problem for AI politicians: If we assume that humans, at least in some capacity, remain “in control” of AI governance, then the system has to either be based on a direct democracy or function at the whim of whatever political goals the programmers put into code. The recently discovered malfeasance at Volkswagen, where engine control units were embedded with a few lines of code that allowed them to cheat emissions tests, is an example of how machines are only as “moral” as their programming allows them to be. If one thinks of the code we program into artificial intelligence as a sort of DNA, a symbolic representation of how our own minds structure the world and interpret reality, who’s to say that we wouldn’t also pass along our own weaknesses, perhaps even unintentionally? The transhumanist answer to these questions is faith: Faith that technology and artificial intelligence will one day advance to such a point that what came before won’t matter, and what comes after is absolutely unimaginable.
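The Volkswagen case turns on exactly this kind of conditional. A minimal, entirely hypothetical sketch (the function names, signals, and thresholds here are invented for illustration, not drawn from any actual engine-control code) shows how little logic it takes to encode a machine’s “morality”:

```python
# Hypothetical sketch of a "defeat device" conditional. Not Volkswagen's
# actual code -- just an illustration of how a few lines of programming
# can decide when a machine behaves "morally."

def looks_like_emissions_test(steering_angle, speed_variance):
    """A dynamometer test keeps the wheels straight and follows a
    fixed speed trace, so both signals sit near zero."""
    return steering_angle == 0 and speed_variance < 0.01

def choose_engine_mode(steering_angle, speed_variance):
    if looks_like_emissions_test(steering_angle, speed_variance):
        return "low_emissions"  # full exhaust treatment for the test bench
    return "normal"             # better performance, higher emissions

print(choose_engine_mode(0, 0.0))   # test-bench conditions -> low_emissions
print(choose_engine_mode(12, 5.3))  # ordinary driving -> normal
```

The machine’s “ethics” live entirely in that one `if` statement: whatever the programmers put there is what the machine does.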

But definitely positive. Or negative, depending on who you ask.

The conception of progress put forward by some of the more radical transhumanists doesn’t sound merely religious—it sounds specifically Christian: history progressing toward a rupture point where man is born anew, made immortal and limitless in a time after time. Instead of asking whether transhumanism is based on good science, it might be more useful to question its rigor as a theology.

And what if the techno-rapture actually does happen and we’re stuck lounging blissfully in Brautigan’s cybernetic garden? What if every positive claim of transhumanists is realized, and AI politicians, at the cost of human agency, make us immortal in a paradise where, like in David Byrne’s heaven, nothing ever happens? What would human character be like with AI negotiating the hard work of paideia on our behalf? Could we be, as T.S. Eliot wrote, “dreaming of systems so perfect that no one will need to be good”?