by Elmo Keep and James Douglas

Elmo Keep: Greetings, fellow meat human. So good to connect with you over the networked machines.

James Douglas: Yes, Elmo. *Secret handshake* 01001100 01101001 01110110 01101001 01101110 01100111 00100000 01100110 01101111 01110010 01100101 01110110 01100101 01110010 00100000 01110111 01101111 01110101 01101100 01100100 00100000 01110011 01110101 01100011 01101011 00101100 00100000 01100001 01100011 01110100 01110101 01100001 01101100 01101100 01111001 00101110 00100000
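For anyone who doesn’t speak fluent robot: the handshake above is ordinary 8-bit ASCII. A few lines of Python (a decoding sketch, not part of the original exchange) will translate it:

```python
# Decode the binary "secret handshake": each space-separated
# group is one 8-bit ASCII character code.
bits = (
    "01001100 01101001 01110110 01101001 01101110 01100111 00100000 "
    "01100110 01101111 01110010 01100101 01110110 01100101 01110010 00100000 "
    "01110111 01101111 01110101 01101100 01100100 00100000 "
    "01110011 01110101 01100011 01101011 00101100 00100000 "
    "01100001 01100011 01110100 01110101 01100001 01101100 01101100 01111001 00101110"
)
message = "".join(chr(int(byte, 2)) for byte in bits.split())
print(message)  # Living forever would suck, actually.
```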

I’m interested in understanding something about the genealogy of the ideas that make up the cosmology of Tech Thought today, since that industry continues to have such far-reaching social and political effects. Your recent Verge piece is very diligent about providing this sort of historical insight! Especially, re: how so many intellectual and research pursuits in current tech culture — and, with this transhumanist candidate for President, politics — can be traced back to the Extropians and their mailing list. These are/were an interesting bunch, it seems?

EK: Transhumanism and contemporary techno-futurism are closely interlinked, and transhumanism is deeply influenced by sci-fi, though more so what might be called hard sci-fi — cyberpunk, particularly (Joshua Raulerson has a great book on this, Singularities). The most influential set of ideas in modern transhumanism came from Max More (Max O’Connor) and T. O. Morrow (Tom Bell, a lawyer) in the form of Extropy. Extropy was an ad-hoc philosophy they cobbled together which positioned itself in opposition to entropy; human beings would overcome death and decay through technology. It was extremely hostile to the State, to governing systems of all kinds — particularly to paying taxes. It was staunchly Randian Libertarian, individualist, and techno-utopian in its outlook. It also advocated for what is basically eugenics: a class/race of superbeings with augmented hyperintelligence and physicality who, through these enhancements, would be literally unstoppable and would colonise the universe.

Extropy began as a zine, Vaccine For Future Shock, later rebranded the Journal of Transhumanist Thought, and then became an institute (since closed, because the goals of Extropy, they decided, had been realised by 2006). The zine soon migrated to a mailing list, which is still active today. The list exists in various dumps of archived sections, but there isn’t a complete archive anywhere — a lot of it is lost to the offline wilds and dead links, which is itself not a great indicator for their hopes of a fantastic future filled with currently unimaginable technologies.

The Extropians had what can look, from here, to be an undue amount of influence on the tech culture and industry of the last twenty-five years. This is explained a little by the nascent form of the web at the time. There were comparatively few mailing lists to belong to then — pre-Reddit and giant message boards — so it’s not unusual that a bunch of people congregated on what was one of the most well-known lists of the early nineties. But it was very much a scene (they loved to throw conferences), and what is unusual about it is where a lot of those people ended up, what their connections were, and what motivated and informed their points of view. It was also unusual in that people were using their real names there, by and large. Weird not just because anonymity, and the fight against it, would come to dominate the social web, but because the Extropians were extremely concerned with privacy — cryptography in particular.

Original ad for the zine in bOING bOING, then also a zine.

Anyway, details!

JD: I guess that’s the benefit of taking the epochal view, right? You never need to sweat the small things.

EK: Quite. The Extropian list was home to the lengthy discussion of the passions of transhumanists. These included everything from dropping acid and polyamory to space exploration, physics, philosophy and gene therapy. It was all pretty outré and very much part of the Bay Area’s existing counterculture lineage. It was about blowing minds, dude (they had a special handshake). There were elements of transhumanism that were talked about more than others on the list. Of special interest were science fiction, cryonics, cryptography, anonymous digital cash, nanotechnology, the Singularity, artificial intelligence, mind-uploading, smart drugs, immortality, cybernetics, robotics, and how much the Government sucked.

No one outside this tiny clique really knew much about the Extropians or who they were until Ed Regis profiled the founders for one of the first issues of Wired. “Meet The Extropians” is a portrait of a moment in time so quaintly dated it’s a weird sort of curio now.

JD: The Extropian party scene described by Regis sounds positively loose. I have my doubts that hanging out in suburban hot tubs is a particularly healthful pursuit (because, to me, they are disgusting incubators of filth) but I suppose if you expect to be living forever a little Legionnaires’ disease won’t bother you. Will cyborg bodies get pruney skin during bath time?

EK: Well, physical perfection and ideals of beauty were very important to the Extropians, so wrinkled skin of any kind at any time would have been thought an aberration.

JD: Here’s a quote from the Wired Extropian piece:

The overall goal is to become more than human — to become superhuman, “transhuman,” or “posthuman,” as they like to say — possessed of drastically augmented intellects, memories, and physical powers. The goal is a society based on freely chosen social arrangements, on systems of self-generating “spontaneous order,” as opposed to massive legal structures imposed from above by the State. And the goal is to gain as complete control over the physical universe as is compatible with natural law.

This notion of “spontaneous order” is an interesting one, not least because it is so persistent, and pervasive. There’s neat stuff on this train of thought in episode two of Adam Curtis’ All Watched Over By Machines of Loving Grace, where Curtis takes a look at Californian technologists’ obsession with networked societies. He argues that the conviction that digital technology could produce decentralised power structures was inspired by ecological theories of environmental “balance” that were actually totally bogus. He concludes, by analogy with the collapse of hippy communes in the seventies, that decentralised power structures tend to conceal hidden hierarchies, no matter how hard they pretend to steer toward a “natural” equilibrium.

EK: I mean, if video games can’t unite people in a self-organizing utopian social order, then what can?

JD: What is the sharing economy but this ecological fantasy in a contemporary guise — an ostensibly “free” market of consumers and service providers, more or less overtly structured by putatively neutral mediators like Uber or Airbnb? I guess it’s convenient that “unregulated” markets not only cater to the fantasy of the superpowered (economic) actor, but also open up a huge structural chasm in which businesses can insert themselves and extract profit. It’s almost as if, maybe, the techno-idealism of this sort of thinking conceals, barely, a mess of fairly quotidian capitalist imperatives?

EK: Quotidian capitalist imperatives paired with “on demand” efficiency. I think in the case of contemporary SV, this is particularly insidious, and not even vaguely concealed. It’s putting two terrible human impulses together and stoking them to the point where they are now the driving factors of tech innovation: How can I get something as cheaply as possible and as close to immediately as possible, regardless of how many people get screwed in that process? It rewards both intolerable impatience and being a cheapskate. How terrific that this is where we have evolved to.

But like you say in your piece, anything can be recast as heroic if you spin the right origin myth, preferably one well-versed in Star Wars. I wish I could find where I read it, but I read someone on Twitter a while ago saying that the on demand economy can be summed up as, How can I get someone to do what my mother used to do for me? (My laundry, bring my food immediately, pick me up from wherever I am.)

JD: But, also, infantilization grossly reconceived as rationalism: as though making things faster, easier, and more immediately gratifying were the only pursuit worth any mental energy. This confusion of intellect with total selfishness maybe helps explain why when these folks start to talk about creating genuine artificial intelligence they immediately start worrying about it killing everybody. Like, whoa there: Speak for yourself.

EK: The obsession AI researchers have with what they perceive to be “rationalism” is hubris-laden (harking back to Extropian ideas of overcoming natural laws with rational thinking). They are convinced not only that an AI that could spin out of control will exist, but that only they will be able to prevent it from killing everyone. And the way they propose to do this is to somehow program it with these utilitarian principles of “rationality” at its core. This is how they assume to impose a moral framework on a machine: If it only ever makes “rational” decisions, then it will only ever be helpful. Rational according to whom? According to what set of principles? Agreed on by whom? “Helpful” how? Apparently anyone who doesn’t understand these largely mathematical modes of decision theory is not qualified to have any say in this — in the design of a machine that is meant to be able to mimic the intelligence of a human being. This again comes back to the fantasy of self-organising technological structures being able to make better decisions than actual human beings, as if the mess of society could just be done away with if only there were powerful enough equations or algorithms. Equations authored by the right set of geniuses.

JD: Another “rational” solution. Of course.

EK: The way this assumes extremely simple (and relativistic) solutions to unfathomably complex questions of social order is terrifying. These are the people who have appointed themselves the only ones smart enough to understand any of this — their own suppositions. Nick Bostrom spearheads a great deal of it, which he started on after joining (and spending years on) the Extropians mailing list; he founded the World Transhumanist Association and then established a series of think tanks and university departments dedicated to these extremely flimsy transhumanist visions of society. Only now they have actual money behind them and significant coverage in the mainstream press.

JD: Also terrifying is not just the fact that the “solutions” are so simple, but the fact that they are so intellectually tenuous, and yet still so resilient — they keep cropping up in fun new, extremely influential places.

EK: These ideas have been around for a long time. In Tim May’s hundred-and-sixty-thousand-word Cyphernomicon (all in red text on black! #rememberthe90s), which he wrote after becoming flush with a fortune made at Intel and which was originally conceived as a sci-fi novel, he puts forth the idea of cyberspace, as it was then called, being a lawless zone in which people could pursue whatever freedoms they wanted, free of government interference or surveillance. The Cyphernomicon was a formalisation of everything happening on the Cypherpunks mailing list, which had a very significant overlap at the time with the Extropians list. It also alluded to a sci-fi novel by Vernor Vinge (who first posited the Singularity), True Names, which was a fictional vision of what this internet could be. (Neal Stephenson’s Cryptonomicon was a kind of allusion to the Cyphernomicon — a speculative novel about cryptography, offshore data havens and digital cash where all kinds of experimental research projects are possible. It was compulsory reading for all PayPal employees.)

A sacred cypherpunk tenet was “cypherpunks code,” as in they actually implemented these things, and worried (or not) about any consequences later. This is where PGP and Tor came from — and eventually what became Bitcoin and Wikileaks also came out of that list. Thiel’s mantra, “Don’t Ask For Permission, Ask For Forgiveness,” as widely adopted now in SV start-up culture, is just an updated take on “cypherpunks code.” It’s the same impulse: to circumvent existing structures through technology (break shit), and worry (if you do at all) about the consequences later. The very staunch Libertarianism that is so alive in Silicon Valley now has this long, odd pedigree.

JD: It’s sort of wild to think about how close-knit and promiscuous the Venn diagram of Extropian associates and contemporary tech financiers/developers/thinkers is — and wilder still when counting in the circle of related culture-industry operatives, like Stephenson, or employees at Pixar and Lucasfilm. These kinds of connections honestly get my brain churning. I recently read Foucault’s Pendulum, by Umberto Eco, which is about a guy who is so obsessed with uncovering the contours of a half-imagined conspiracy that he ends up inventing the most complete, most plausible version of that conspiracy yet, and all its adherents hunt him down, convinced that he possesses Secret Knowledge (I think it’s a parable about epistemology or something).

EK: Conspiracy theories are so fun right up until people start believing them.

JD: Anyway, I feel like it’s easy to dip into a paranoid frame of mind when thinking about the many interrelations between the tech industry, and the culture industry and the thinkers that cross over and speak to both. I don’t want to ascribe too much weight to the associations between them, but they paint a very interesting picture? Basically, what I’m asking is: Is there a not-so-secret cabal of science fiction geeks (and hot-tubbers) trying to change the world for the worse, using technology?

I like the little timeline of Transhumanism you spool out in your Verge piece, which includes the 1931 appearance in Amazing Stories of “The Jameson Satellite” by Neil R. Jones, about a man whose frozen brain is rescued by cyborgs and installed in a robot body. That story, you write, went on to inspire Robert Ettinger, founder of the Cryonics Institute. (You also mention “The Altered Ego,” by Jerry Sohl, as an early instance of mind-uploading in fiction.) In my Lucasfilm thing, I wrote that “The history of the tech industry can, in a way, be traced by its inspiration from and adoption of the idealist nostrums of traditional science fiction, like artificial intelligence, virtual reality, and extended lifespans,” which is a pretty loose claim, to be honest, but also, I hope, one that suggests something reasonable about the visible interplay between science fiction writers and their audiences, and futurists, and techno-prophets, and, now, tech developers and VCs.

And, okay, so my use of the “geek” label is very tendentious, but it maybe fits my image of the kind of person who is a little too immersed in the emotional comforts and power fantasies offered by science fiction (or fantastic fiction, or whatever). Of course, perfectly sensible adult humans can be intellectually engaged by sci-fi — and emotionally rejuvenated by heroic stories like Star Wars — but a person who actually strives to live within the sentimental frameworks of fantastic fiction by actualising its technologies into everyday life is maybe in the grip of some kind of arrested development?

EK: I’ve never actually seen a Star Wars film. I made a very bad mistake in trying to correct that by watching The Phantom Menace first, which was… an error. I thought you were meant to watch them in that order! In any case, I really love how excited people are about The Force Awakens; it seems like it lived up to all they hoped it would be. Which would be so nice, for something you have very fond memories of to actually match up to your hopes. Not nice enough to build your technocapitalist Galt’s Gulch around, though.

JD: I keep thinking about Thiel’s essay for Cato Unbound, in which, among other things, he proclaims his own heroic ambitions for tech funding. In it, he writes, “I remain committed to the faith of my teenage years,” by way of explaining his dedication to “authentic human freedom.” I take it that this is meant to seem like a principled stance, but it’s also kind of embarrassing to see an adult admit that? Adolescent perspectives on “freedom” (or, well, middle-class, white, adolescent perspectives) may or may not be meaningful, on occasion (even a stopped clock is right twice a day), but imagine taking them seriously as a basis for a real-world social and technological program!

Aspects of the neoreactionary political project (which has been linked to Thiel) seem palpably to run along the same lines, to me: a childish vision of liberation denuded of any actual social responsibility, or any ability to think clearly about the welfare of others. Representative democracy has some problems, but to wholly reject it in favor of autocracy seems a little like the perspective of a small child unwilling to share its toys or lose its comforts. And so, we have projects like Seasteading, where you get to live basically in Waterworld and never have to do what your parents tell you to do, or Alcor, where you never have to really say goodbye to anyone you love.

This juvenility could seem kind of cute, or at least harmless, except that kids are actually terrible at thinking about the needs of others, much less making considered moral decisions. In your piece you argue pretty convincingly that transhumanism is something like a diversion from actual, present political, social, and environmental concerns — which I think suggests something true about how the kind of mentality we’re talking about can end up missing the forest for the trees. Do you think the transhumanist movement is kind of, at its core, unethical?

EK: In the same way they seek to be post-human, transhumanists also seek to be post-ethics, via technology. If you look at the extreme end of transhumanism, the Extropian end, which is really where the contemporary strains of it came from, they saw ethics as completely irrelevant. They advocated for what they called anarcho-capitalism, where, through technologies like digital cash, state financial institutions would be made obsolete and dismantled totally. (Untraceable digital cash was absolutely key to this mission; the development of what became Bitcoin was the number one goal of the Cypherpunks, and was feverishly discussed among Extropians too, almost above privacy and personal security.) PayPal grew out of these same impulses: to code a digital payments system that couldn’t be monitored by government regulation. It was often pitched as a very noble goal — absolute privacy from the intrusion of scrutiny — but what it also enabled was hiding assets and avoiding taxes.

So this, I think, ties pretty directly into what you’re saying; this brand of Libertarianism is basically, “don’t touch my stuff.” Don’t take any of my money to give to other people, why should my success pay for poor people’s welfare? I should not have to be part of any collective, just let me live in my own little castle. I don’t want to share my toys.

JD: If you spend enough time rationalising your own selfishness and inflated self-image it’s inevitable that you’ll end up committed to some outré political positions.

EK: This was a mission to wholly transcend the State, which they saw as an oppressive tool. Not in the sense that its capitalist expression enslaves almost everyone, most egregiously minorities, but oppressive in that it made them pay tax. You see this very much alive in Silicon Valley today, whether it’s giant corporations hiding their assets to avoid taxes, “philanthrocapitalism” as practised by Mark Zuckerberg, or Peter Thiel’s dream of “changing the world” by circumventing regulation altogether — literally offshoring experimental research and then worrying about the consequences for society later. This is taking “break shit” to a whole new level, “shit” being “what does it mean to be human?”

JD: Here’s where that “changing the world for the worse” suspicion comes in. Re: that “break shit” mentality, in one of his essays, Max More proposes that a key Extropian mode of thought is “Dynamic Optimism”: “an active, empowering, constructive attitude that creates conditions for success by focusing and acting on possibilities and opportunities.” Which is just an incredibly revealing point of view, not just because that principle is so capitalist to its core (the market abhors a vacuum, after all), but for how it dresses up a disinclination to anticipate adverse consequences as a virtue.

EK: Everything was beautiful and nothing hurt.

JD: Not sure how your short-term wants will affect future generations? Don’t worry; just do it (that cypherpunks code, again). It’s interesting, too, how much More’s five steps for Interpreting Experience Positively anticipates aspects of the current tech industry’s resistance to criticism that John Herrman has written about here recently (I mean, step one is actually called “Selective Focus”).

EK: It is in this way an incredibly infantile mindset: You are so special and unique and so much more intelligent than everyone else that the rules do not apply to you. Everyone else is a luddite idiot stuck muddling through their archaic democracy. And that’s where you end up at neoreactionism and its “Sith Lords,” as you point out.

JD: It’s hard not to feel gently sympathetic when tallying the effects of this arrested-development/techno-utopian heroic fantasy. Except also apprehensive, given the billions of dollars and the cultural and technological resources at its adherents’ disposal.

EK: It seems like the worst thing that could ever happen to a person would be to never mature intellectually or morally or emotionally beyond adolescence and then somehow come into incomprehensible amounts of money.

JD: Not so good for the rest of us, either.

James Douglas and Elmo Keep are just two meat humans with meatish concerns and worries.

Gif via MachinePix