Deep Dreams of Tomorrow

Science fiction tells us that a change in a past event, caused by the intervention of a time traveler, will open up a parallel timeline that leads to an alternate present. The example that comes to mind, for some reason, is Back to the Future, Part II. After an unexpected disturbance in the spacetime continuum, Marty McFly visits a world in which Biff Tannen, his father’s high school bully, has gone from unscrupulous small-time businessman to a replica of our current president.

If you accept this idea, it raises the stakes of the present moment: each decision leads not to one inevitable outcome, but a multitude of possible futures. The passage of time isn’t a story, following a hero’s journey from “call to adventure” to “return home.” It’s a website with a series of links, each of which leads to a subsequent series of links. You may begin an evening by reading the Wikipedia entry for tulips or graham crackers, and, depending on the decisions you make, find yourself becoming an expert on Jeffrey Dahmer or Zermelo–Fraenkel set theory by dawn. Unlike the linear media of the printed page, time branches out into alternate possibilities, corresponding to what sociologist Ted Nelson, anticipating the internet decades before its invention, named hypermedia.

On July 23, 2010, Roko, a user of the online forum LessWrong, accidentally opened up a new timeline. LessWrong is a community dedicated to the advancement of rationality, overseen by Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute (MIRI). In Harper’s, Yudkowsky characterized its project as a “New Enlightenment.” The forum is a hub for discussion of the Singularity, a vision of the future that anticipates artificial intelligence both surpassing the human mind and merging with it. Yudkowsky’s aim is to make sure that any future sentient machine — a “superintelligence” — is interested in peaceful coexistence with its makers. Rather than the violent mercenary of Terminator, the altruistic companion of Terminator 2.

The Terminator himself accounts for his Manichean mutability in the second film. “My CPU is a neural-net processor,” he says, “a learning computer.” The direction of actually existing artificial intelligence has followed this path, increasingly deploying a method known as “machine learning.” The New York Times recently reported on Google’s application of machine learning to its translation function, generating a paradigm-shifting improvement that caused a global stir among followers of AI. The result is said to be closer to the elusive open-ended general intelligence that humans possess even in infancy, rather than the goal-oriented algorithmic intelligence to which machines have traditionally been limited.

Instead of being programmed with a set of grammatical rules and a dictionary of vocabulary, Google’s new “neural network” examined volumes of phrases, sentences, and paragraphs in multiple languages, and drew its own conclusions. Like an infant learning a first language, it learned through observation rather than computation. Of course, like a child, a program needs a parent for guidance, and programmers had to monitor and correct its behavior. And like a child, a program will be both eager to please and prone to disobey.
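The distinction can be caricatured in a few lines of Python. This is a toy sketch that assumes nothing about Google’s actual system: a rule-based program is handed its behavior by a programmer, while a “learning” program infers the same behavior from example pairs.

```python
# Toy contrast, purely illustrative: hand-coded rules vs. learned behavior.

# Rule-based: the mapping is explicitly programmed.
def rule_based(x):
    return 2 * x + 1

# Learned: fit a line to observed input/output pairs (ordinary least squares),
# drawing its own conclusions from examples rather than being told the rule.
examples = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]  # secretly y = 2x + 1

n = len(examples)
mean_x = sum(x for x, _ in examples) / n
mean_y = sum(y for _, y in examples) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in examples)
         / sum((x - mean_x) ** 2 for x, _ in examples))
intercept = mean_y - slope * mean_x

def learned(x):
    return slope * x + intercept

print(rule_based(10))  # 21, by decree
print(learned(10))     # 21.0, by inference from observation
```

The learner arrives at the same answer, but by a different route: nothing in its code states the rule, and with different examples it would have concluded something else entirely.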

This tendency is brought into stark relief in Google’s Deep Dream program, in which a neural network scans an image for recognizable patterns, attempting to identify its contents the way a human would. The program produces evidence of its thought process by superimposing other corresponding images onto the original. Google’s image recognition system, trained by its programmers to recognize human faces and differentiate between kinds of pets, sees eyes and dogs everywhere. The desires, conscious and unconscious, of the machine’s creators are inevitably implicated in its ostensibly autonomous development.
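The mechanism can be sketched in a few lines. The following is a loose caricature, not Google’s code: the “detector” below is a hand-coded stand-in for a trained network, but the core move is the same one Deep Dream makes, gradient ascent on the input image to amplify whatever the model already responds to.

```python
# Toy sketch of the Deep Dream idea: repeatedly nudge an image so that
# a detector's response grows stronger, superimposing the detector's
# expectations onto whatever it is shown.
import numpy as np

rng = np.random.default_rng(0)

# A fixed detector: this hypothetical model "knows" exactly one 4x4
# pattern, a bright square in the center of the image.
template = np.zeros((4, 4))
template[1:3, 1:3] = 1.0

def response(image):
    """How strongly the detector fires on this image."""
    return float(np.sum(image * template))

def dream(image, steps=50, lr=0.1):
    """Gradient ascent on the detector's response.

    For this linear detector, the gradient of the response with
    respect to the image is simply the template itself.
    """
    img = image.copy()
    for _ in range(steps):
        img += lr * template
        img = np.clip(img, 0.0, 1.0)  # keep valid pixel values
    return img

noise = rng.random((4, 4)) * 0.1  # faint random static
dreamed = dream(noise)

print(response(noise) < response(dreamed))  # True
```

Because the only thing this toy detector knows is its template, “dreaming” paints that template onto random static, just as a network trained on faces and pets paints eyes and dogs onto clouds.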

If the builders of technology are transmitting their values into machinery, this makes the culture of Silicon Valley a matter of more widespread consequence. The Californian Ideology, famously identified by Richard Barbrook and Andy Cameron in 1995, represented a synthesis of apparent opposites: on one hand, the New Left utopianism that was handily recuperated into the Third Way liberal centrism of the 1990s, and on the other, the Ayn Randian individualism that led more or less directly to the financial crisis of the 2000s.

But in the decades since, as the consumer-oriented liberalism of Bill Gates and Steve Jobs gave way to the technological authoritarianism of Elon Musk and Peter Thiel, this strange foundation paved the way for even stranger tendencies. The strangest of these is known as “neoreaction,” or, in a distorted echo of Eliezer Yudkowsky’s vision, the “Dark Enlightenment.” It emerged from the same chaotic process that yielded the anarchic political collective Anonymous, a product of the hivemind generated by the cybernetic assemblages of social media. More than a school of thought, it resembles a meme. The genealogy of this new intellectual current is refracted in the mirror of the most dangerous meme ever created: Roko’s Basilisk.

The Simulated Afterlife

The primordial soup that led to the Basilisk’s genesis is transhumanism, the discourse of Singularity as personal narrative. For some of its advocates, most famously Silicon Valley icon Ray Kurzweil, the animating desire of building machine intelligence is apparently apolitical. It is the ancient fool’s errand enacted in the legend of the fountain of youth: the desire to eliminate mortality. If we can bring a machine to life, we should be able to bring someone who has died back to life. We will accomplish this by inputting information about that person into a program, which will then run a simulation of that person so accurate it will be indistinguishable from the original. In anticipation of this eventuality, Kurzweil keeps a storage unit full of the old possessions of his father, whom he intends to resurrect by feeding information into a superintelligent computer.

If you were to be duplicated in an exact replica, including not just all of your bodily characteristics, but every one of the thoughts and memories that has been physically engraved onto your brain, would that replica be you? This is a problem that troubles both philosophers and scientists, but not Ray Kurzweil. “It would be more like my father than my father would be, were he to live,” he told ABC News.

Hedging his bets, Kurzweil fends off the threat of expiration by taking hundreds of nutritional supplements a day and receiving weekly vitamin injections. To make it to 2045, the year he predicts the Singularity will take place, he will have to live to 97. Kurzweil is controversial even among those who share his outlook, but it’s a widespread assumption among Singularitarians that death is not the end.

Unfortunately, Roko discovered a drawback to superintelligent resurrection. His post speculated that once the AI comes into being, it might develop a survival instinct that it will apply retroactively. It will want to hasten its own birth by requisitioning human history to work towards its creation. In order to do this, it will institute an incentive that dictates how you will be treated after you come back to life. Those of us who know about this incentive program — and I’m sorry to say that this now includes you — will be required to dedicate our lives to building the superintelligent computer.

Roko gave the example of Elon Musk as someone who has the resources and the motivation to make a worthy contribution, and will be duly rewarded. As for the rest of us, if we don’t find a way to follow through, the AI will resurrect us via simulation and proceed to torture us for all eternity.

This is a simplification of Roko’s post, and if you don’t understand Bayesian decision theory, it may seem too silly to worry about. But among the rationalists of LessWrong, it caused panic, outrage, and “terrible nightmares.”

Between Fiction and Technology

Yudkowsky responded to Roko’s post the next day. “Listen to me very closely, you idiot,” he began, before switching to all caps and aggressively debunking Roko’s mathematics. He concluded with a parenthetical:

For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.

The name “Roko’s Basilisk” caught on during the ensuing discussion, in reference to a mythical creature that would kill you if you caught a glimpse of it. This wasn’t evocative enough for Yudkowsky. He began referring to it as “Babyfucker,” to ensure suitable revulsion, and compared it to H.P. Lovecraft’s Necronomicon, a book in the horror writer’s fictional universe so disturbing it drove its readers insane.

Yudkowsky’s point was that the incentive couldn’t have existed until someone brought it up. Roko gave the not-yet-existing AI the idea, because the post will now be available in the archive of information it will draw its knowledge from. At another level of complexity, by telling us about the idea, Roko implicated us in the Basilisk’s ultimatum. Now that we know the superintelligence is giving us the choice between slave labor and eternal torment, we are forced to choose. We are condemned by our awareness. Roko fucked us over forever.

Like all fables, the story of Roko’s Basilisk has a moral. But rather than an expression of a value system, it offers a theory of cause and effect. Michael Anissimov, former media director of MIRI, expressed this idea in a statement that Ray Kurzweil quoted in his manifesto, The Singularity Is Near: “One of the biggest flaws in the common conception of the future is that the future is something that happens to us, not something we create.”

Roko’s Basilisk isn’t just a self-fulfilling prophecy. Rather than influencing events toward a particular result, the result is generated by its own prediction. The implications blur the boundaries between science and fiction. The archives from which an artificial intelligence draws data will contain the work of both Ray Kurzweil and H.P. Lovecraft, and it may not distinguish between them the way we do. Instead of Kurzweil’s world without death and disease, it may attempt to build Lovecraft’s R’lyeh, a loathsome city in the sea that exists on a plane of non-Euclidean geometry.

There isn’t a word for this cause-and-effect relationship in ordinary English, but, in the mid-nineties, the philosopher Nick Land coined one: hyperstition, that which is “equipoised between fiction and technology.” This neologism describes something more than a superstition, something beyond belief — a description with divine power. In the beginning was the Word.

What kind of future are we creating? Both Nick Land and Michael Anissimov have been clear about their vision for the world of tomorrow. They are self-professed neoreactionaries.

The Genealogy of Amorality

Neoreaction, or NRx, is an esoteric political doctrine of recent vintage. It became the locus of controversy in early 2017, after London art gallery LD50 convened a conference and exhibition featuring NRx ideologues, including Land, white supremacist journalist Peter Brimelow, and Anders Breivik sympathizer Brett Stevens. Protesters forced the gallery to shut down.

But the movement has less lofty origins than the currents of reactionary chic in contemporary art. In an article on Breitbart called “An Establishment Conservative’s Guide to the Alt-Right,” Allum Bokhari and Milo Yiannopoulos identified neoreactionaries as the intellectual vanguard of the movement, noting that they “appeared quite by accident, growing from debates on LessWrong.com.” Thought experiments in dispassionate rationality had led some users of the forum to dark places. Eliezer Yudkowsky has as much patience for it as he did for Roko. “I am actively hostile to neoreaction,” he has written.

Given the hostile work environment, Anissimov left MIRI in 2013. He opened a competing forum that would be more hospitable to neoreaction, the now defunct MoreRight, and started a publishing company. He has since written and self-released books like Our Accelerating Future, A Critique of Democracy, and Idaho Project, “a white nationalist manifesto that integrates futurism, survivalism, and simple common sense into a proposal for concrete action.”

Anissimov is a follower of the Italian fascist philosopher Julius Evola, whose work, The New York Times has reported, is probably also on Steve Bannon’s bookshelf. Given the prevalence of the alt-right on forums like 4chan, it’s not a great leap from the Californian Ideology to extreme reactionary views. As Angela Nagle has written in Jacobin, the “creative energy” of the alt-right is the product of a synthesis of an “amoral libertine Internet culture” with appeals to white male identity and resentment — not an uncommon demographic in Silicon Valley. Mother Jones has reported that according to neo-Nazi Andrew Anglin, Santa Clara County, where Apple and Intel are based, is the largest traffic source for his widely read white supremacist website The Daily Stormer. Anissimov may simply have been the Valley’s foremost innovator.

In contrast, Nick Land took a more serpentine path. A month before the 2016 election, Land made his first appearance as a columnist at The Daily Caller, the right-wing news outlet founded by Tucker Carlson. “Democracy tends to fascism,” he wrote, presenting a series of coy abstractions that betrayed his philosophical roots but withheld his political beliefs.

Land is an unlikely conservative media pundit, and a strange bedfellow of the alt-right. But like Roko, his writing helped bring the monster into being.

An Invasion from the Future

“In any normative, clinical, or social sense of the word, very simply, Land did ‘go mad,’” writes Robin Mackay, in the introduction to Land’s essay collection Fanged Noumena. Mackay was Land’s student at the University of Warwick, first encountering him in 1992 through a course called “Current French Philosophy.” He remembers him as a sort of cyberpunk absent-minded professor, “quivering with stimulants” while generating cryptic texts on an “antiquated green-screen Amstrad computer.”

Land had published a single book, a study of Georges Bataille called The Thirst for Annihilation. But the landscape changed in 1995, when Sadie Plant, a self-described “cyberfeminist,” joined the Warwick faculty. Plant established a department called the Cybernetic Culture Research Unit (Ccru), dedicated to the study of matters like science fiction, cryptography, jungle music, H.P. Lovecraft, and, of course, French philosophy.

In contrast to the stolid logical procedures of Anglo-American philosophy of the day, the Ccru called their delirious missives “theory-fiction.” They took their cues from the intellectual currents that emerged in the wake of the May ‘68 uprisings in Paris, particularly Gilles Deleuze and Félix Guattari’s Anti-Oedipus and Jean-François Lyotard’s Libidinal Economy. These works reckoned with the suppression of resistance and the consolidation of state power that followed the fading of the anti-capitalist spirit of the late sixties.

Deleuze and Guattari set out to describe “the most characteristic and the most important tendency of capitalism,” which they called “deterritorialization.” While in traditional societies the “material flow” of production was regulated by the division of the earth, capitalism set it loose. Yet if capitalism liberated production temporarily, it also tried to counteract this tendency by reinstituting forms of “territoriality,” bringing “all its vast powers of repression to bear” on the very forces that drove its unparalleled flows. The path to emancipation, they argued, was not to withdraw from capitalism, but to “accelerate the process.” Lyotard took this tendency in the opposite direction, in what he would come to proudly call his “evil book.” Workers, he said, desire their own oppression. Far from seeking emancipation, they “enjoy swallowing the shit of capital.”

If Ronald Reagan and Margaret Thatcher had served up an all-you-can-eat shit buffet in the 1980s, promoting the free market at the expense of the majority of their citizens, the Ccru responded by taking laissez-faire economics to a perverse extreme. They saw capital itself as the protagonist of history, with humans as grist for the mill. “What appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy’s resources,” Land wrote in his essay “Machinic Desire.” For Land, the Basilisk was already here.

At the time, Benjamin Noys took note of this philosophical trajectory, initially calling it “Deleuzian Thatcherism.” Eventually, in his 2010 book The Persistence of the Negative: A Critique of Contemporary Critical Theory, he gave it a pithier name, the application of which has been both broadly extended and hotly contested: accelerationism. Noys focused his critique on a particular misreading of Marx as a hybrid technological determinist and catastrophist, which licensed the idea that if the accumulation of capital generates and exacerbates the conditions that lead to its dissolution, then it is the duty of radicals to urge capital to fully realize and hence negate itself. Broadly conceived, the futurist teleology the term denotes reveals its affinity with Singularitarian ideology: both see the exponential growth of technology as the key to the next stage of the human species.

In 1997, Plant abruptly resigned her post at Warwick. Land took over. That year, journalist Simon Reynolds wrote a magazine profile of the Ccru, and the Director of Graduate Studies at Warwick’s Philosophy Department denied its existence. There was a procedure that had to be completed to establish a department, requiring paperwork that Plant had never bothered to file.

“Officially, you would then have to say that Ccru didn’t ever exist,” he told Reynolds. “There is, however, an office about 50 metres down the corridor from me with Ccru on the door, there’s a group of students who meet there to have seminars, and to that extent, it is a thriving entity.”

Regardless, the Director promised, “that office will disappear at the end of the year.” Throughout 1997, this nonexistent entity was prolific. Mackay remembers Land living in his office, rarely sleeping. According to philosopher Simon Critchley, Land “produced disciples” by the force of his cult of personality. “You’d go and give a talk at Warwick,” he recollected in Frieze, “and be denounced by people with the same saliva-dribbling verbal tics as Nick and wearing similar jumpers.”

Land eventually began to claim he was “inhabited by various ‘entities,’” named Cur, Vauung, and Can Sah. His work increasingly defied comprehension, sometimes departing from language altogether in favor of invented alphabets and number systems. “It’s another life,” Land told Mackay. “I don’t even remember writing half of those things.”

After the Ccru disappeared, Land disappeared too. He resigned from Warwick in 1998 and resurfaced in the new millennium as a journalist in Shanghai, writing patriotic newspaper op-eds, travel guides, and the occasional theory-fiction.

The afterlife of a self-described “malfunctioning academic” wouldn’t necessarily bear mentioning if not for Land’s unexpected alliance with a different kind of thinker. On April 22nd, 2007, a character named Mencius Moldbug made his public debut on a blog of contrarian commentary called 2blowhards, with an essay titled “A Formalist Manifesto.”

The Exit Sign

“The other day I was tinkering around in my garage and I decided to build a new ideology,” Moldbug began. 2blowhards provided only a vague description of the manifesto’s author, formerly a regular in the site’s comments section. He had “made a score in a recent dot-com boom,” allowing him to spend $500 a month on books. Moldbug responded to nearly every reply in the post’s comments. A week later, he had started his own blog, Unqualified Reservations.

His ideology was idiosyncratic, centered on a reverence for Thomas Carlyle, a Victorian-era essayist best known for his advocacy of the “Great Man” theory of history. He also incorporated measured respect for Austrian classical liberal Ludwig von Mises and individualist libertarian Murray Rothbard, who were on the right track but didn’t go quite far enough.

Over the course of thousands of words, most of them superfluous, Moldbug moved from “formalism” to “neocameralism,” in tribute to the bureaucratic procedures of Frederick William I of Prussia. Finally, in July 2010, the same week as Roko’s fateful post, libertarian blogger Arnold Kling referred to Moldbug as a “neo-reactionary.” The name stuck.

In his earthly life, Moldbug is Curtis Yarvin, a software engineer who is the brains behind a startup called Urbit, the purpose of which evades explanation even for its inventor. Yarvin’s prose is excruciating, but he won a sizeable following by reliably flouting convention and defying decorum. “Very few of Moldbug’s fans have read anywhere near his entire corpus,” Michael Anissimov admits, but most have noticed his amoral disquisitions on the relative merits of obvious injustices like slavery, and his opposition to democracy in general.

One fan who does seem to have read Yarvin’s entire corpus is Nick Land. In 2012, he took it upon himself to systematize the Moldbug ideology, and with his typical flair for denomination, christened it “The Dark Enlightenment.” His sequence of essays setting out its principles has become the foundation of the NRx canon.

If it’s hard to imagine Milo Yiannopoulos or Tucker Carlson pondering Land’s interpretation of Lyotard, it’s just as hard to comprehend Land’s infatuation with Yarvin. It’s a strange intellectual path that begins with “Current French Philosophy” and settles on a right-wing Silicon Valley blogger whose writing is more Dungeons and Dragons than Deleuze and Guattari. Whatever the cause, Land has gone from prophet to apostle.

Along with Yarvin, Land cites a 2009 essay by Peter Thiel for libertarian publication Cato Unbound, which famously announced, “I no longer believe that freedom and democracy are compatible.” Thiel went on to envision “an escape from politics in all its forms,” which Land interprets using an opposition introduced by the economist Albert Hirschman, between voice and exit. The terms describe the ways of exercising rights in a society with which a citizen has grievances: voice is participation in a democratic process that can lead to reform, while exit is departure to a different society. A provisional example Land offers is white flight, the mid-century exodus of affluent white families to the suburbs.

Neoreactionaries don’t advocate doing away with central social organization altogether. Land envisions a “gov-corp,” a society run like a company, ruled by a CEO. Instead of petitioning a government for redress of grievances, unsatisfied customers are free to take their business elsewhere. If this sounds medieval, neoreactionaries don’t deny it — Yarvin sometimes describes himself as a “royalist,” or a “monarchist,” or even a “Jacobite,” in reference to 17th-century opponents of parliamentary influence in British government.

The question is, where do you go after exiting? NRxers don’t dismiss the idea of competing gov-corps on the same land mass, an idea anticipated by NRx intellectual forefather Hans-Hermann Hoppe, an extreme libertarian economist who advocates for a system that he admits is essentially feudalism. On a more abstract level, the neoreactionary fascination with bitcoin imagines the escape to an alternate economy unencumbered by federal regulation. Even Yarvin’s startup, Urbit, seems to be oriented towards exit: it promises an alternative internet inaccessible to outside users.

But the most utopian (dystopian?) wing of NRx literally aims to build Lovecraftian cities in the sea. This project, called Seasteading, is championed by Yarvin’s on-and-off co-conspirator Patri Friedman, whose grandfather Milton Friedman happens to be the economist responsible for the most extreme free market policies in the modern world. Peter Thiel was once Seasteading’s principal backer, as well as an investor in Urbit.

It’s not hard to see why floating sovereign states, out of any existing nation’s jurisdiction, would appeal to the super-rich. At their most innocuous, they might serve as an extension of an offshore bank, allowing for evasion of any type of redistributive tax policy. They also bring to mind the activities of wealthy men like Jeffrey Epstein, who used his private Caribbean island to throw bacchanalian parties for his millionaire and billionaire friends, allegedly revolving around the sexual assault of minors.

The path of exit doesn’t end at the water’s edge. Though you won’t hear him promoting NRx rhetoric, Elon Musk is committed to the idea in his own way, keeping one eye on Mars and one underground.

“A Prophetic Warning”

Yarvin has given the ideology of his enemy – that is, contemporary liberal society itself – an even longer series of names than he gave his own: “progressivism,” “crypto-Calvinism,” “universalism,” “demotism,” and so on. The term that he adopted permanently, though, is “the Cathedral.” It first appeared in the fourth installment of his fourteen-part series “An Open Letter to Open-Minded Progressives,” which, along with the nine-part “Gentle Introduction” and the seven-part “How Dawkins Got Pwned,” is considered his major statement.

Michael Anissimov’s more succinct Neoreactionary Glossary defines the Cathedral as “the self-organizing consensus of Progressives and Progressive ideology represented by the universities, the media, and the civil service.” It’s named for a religious structure because that, according to Yarvin, is what it is. It’s a descendant of the Puritan church, functioning to suppress dissent from its orthodoxy of egalitarianism and democracy, which Yarvin calls the Synopsis.

Mild-mannered Curtis Yarvin must have been surprised, then, when the Cathedral’s attentions landed squarely on his alter ego Mencius Moldbug. In the weeks after Trump’s inauguration, Politico reported that according to an unnamed source, Yarvin has “opened up a line to the White House, communicating with Bannon and his aides through an intermediary.” The claim remained unverified, as Yarvin “does not do interviews and could not be reached for this story.”

Vox managed to interview Yarvin later that day. “The idea that I’m ‘communicating’ with Steve Bannon through an ‘intermediary’ is preposterous,” he said. “I have never met Steve Bannon or communicated with him, directly or indirectly.” A few days later, The Atlantic asked Yarvin about his alleged intermediary. He claimed it was Twitter user @BronzeAgePerv, whose profile describes him as a “Nationalist, Fascist, Nudist Bodybuilder!”

Yarvin’s evasiveness makes it hard to tell whether he’s hiding something, or just trolling. But it’s no surprise he reserved the majority of his contempt for The Atlantic, which, in the original Dark Enlightenment sequence, Nick Land called the “core Cathedral-mouthpiece.” The Atlantic went on to speak to Land, who was his usual self. “NRx was a prophetic warning about the rise of the Alt-Right,” he said.

NRx has gotten some attention before. Pieces in TechCrunch in 2013, The Baffler in 2014, and The Awl in 2015 all offered surveys of the ideology. The mainstream media took notice of one particular event, when Yarvin was disinvited from the Strange Loop tech conference after the organizers discovered his blog. Breitbart’s Allum Bokhari wrote an article in his favor, arguing that Yarvin’s politics are “abstract.” There is wide speculation among readers about just how serious Yarvin is, including from his most prominent reader. “Vast structures of historical irony shape his writings, at times even engulfing them,” says Nick Land.

The Cathedral Bell

“Vast structures of historical irony” is a rather generous description of what’s known on the internet as “shitposting.” Know Your Meme defines the term as “a range of user misbehaviors and rhetoric on forums and message boards that are intended to derail a conversation.” This isn’t just Yarvin’s response to interviews, it’s his whole rhetorical style. His attention-seeking contrarianism, which successfully distracts both web-surfing nerds and mainstream media reporters, disguises politics that are more conventional than they appear.

The Atlantic claims that Bannon’s alleged contact with Yarvin is a “sign of his radical vision,” evidence of an unprecedented shift to the right. Bannon views the world as undergoing “a clash of civilizations, featuring a struggle between globalism and a downtrodden working class as well as between the Islamic and Western worlds.”

But in fact, The Atlantic was where the phrase “clash of civilizations” was first used to describe global politics, in a 1990 article by Bernard Lewis called “The Roots of Muslim Rage.” Even the “gov-corp” is no aberration. Trump has promised to “run our country the way I’ve run my company,” and indeed, has filled his cabinet with more billionaires than any presidential administration in history. The gov-corp model is endemic to American politics; its most explicit expression by an American politician came in Woodrow Wilson’s 1887 essay “The Study of Administration.” It’s also the cornerstone of the philosophy of neoliberalism, as propagated by Friedrich Hayek, von Mises, and Milton Friedman. Under the neoliberal order, we are not homo sapiens but homo economicus, economic agents motivated only by rational self-interest. Liberty is reduced to participation in a competitive market.

It was in The New Republic that the most odious aspect of NRx ideology, scientific racism or so-called “race realism,” entered contemporary political discourse. In 1994, under then-editor Andrew Sullivan — who continues to show not the least bit of remorse — the magazine published excerpts from Richard J. Herrnstein and Charles Murray’s The Bell Curve, a book that argued that economic disadvantages among minority demographics were due to lower cognitive ability. NRxers subscribe to a more explicit version of this idea, which they refer to using the euphemism “human biodiversity.”

In 2012’s Coming Apart, Murray expanded his argument, claiming that poor whites are unable to rise above their station due to the same cognitive defects The Bell Curve had previously identified in people of color. More recently, after Trump’s election, Kevin Williamson of National Review wrote that the poor whites of “dysfunctional, downscale communities” in the Rust Belt “deserve to die.” They are “negative assets” who have brought their lot upon themselves. Perhaps it’s no coincidence that the article makes knowing reference to the Cathedral and cites Yarvin by name.

Williamson isn’t the only mainstream pundit who reads Yarvin. Rod Dreher has referred to the Cathedral in The American Conservative, as has Ross Douthat at the New York Times. In the early stages of the general election campaign, Douthat tweeted: “Trump-Moldbug. Just putting it out there.”

The New Republic itself is back on the case. A recent article by Kevin Baker took up the proposition previously advanced by National Review, on behalf of the political center. Baker called for a “Bluexit” of affluent coastal liberals who no longer want to share their country with Trump voters. “Truth is, you red states just haven’t been pulling your weight,” he said, sounding remarkably like a neoconservative addressing the nation’s minorities. Land linked the article on his blog, commenting, “simply, yes.”

White Flight to Mars

In spite of its total lack of validity, this kind of racist and elitist pseudoscience, explicitly nurtured by the neoliberal mainstream, continues to be accepted by respectable, palatable pundits. NRx gets no credit for introducing such ideologies; it has only taken them to their extreme yet necessary conclusions. The reactionary version of human biodiversity has been kept alive across a wide spectrum of the right, from the aristocratic white nationalists of American Renaissance to the Pepe frogs and anime trolls of 4chan. Without explicitly supporting them, Land has aligned himself with them. His acceptance has been mutual, with the Dark Enlightenment becoming a topic of conversation at American Renaissance’s 2014 national conference.

Much of the Dark Enlightenment sequence is devoted to an apologia for John Derbyshire, a former National Review staffer who has become a fellow-traveler to white supremacists. His essay “The Talk: Nonblack Version,” written in the wake of Trayvon Martin’s murder, was a heated defense of the presumption of guilt for black men. Typically, though, Land has added his own layer of complication to the argument. In an editorial for the Alternative Right blog, started by the titular movement’s originator, Richard Spencer, and now run by his collaborator Colin Liddell, Land named his theory of human genetics hyperracism.

Land does endorse the idea of typical levels of ability correlating to different “sub-species” of humans. But unlike white nationalists, he’s not interested in differentiating solely by ethnicity. Instead, he prioritizes socioeconomic status, calling it “a strong proxy for IQ.” Though race correlates with socioeconomic status, says Land, a “genetically self-filtering elite” would not be strictly racially homogeneous. A meritocracy allows superior beings to rise to the top, and though most of them will be white and Asian, superiority ultimately falls along a different “axis of variation.” Perhaps taking a cue from Musk, he concludes that “space colonization will inevitably function as a highly-selective genetic filter.” White flight to Mars?

Rather than taking a more extreme view than the likes of Murray, Williamson, and now liberal columnist Frank Rich, Land has simply carried the mainstream ideology to its inexorable result. The ugly underbelly of the conventional view of market society as a meritocracy is precisely Land’s hyperracism: the assumption that some people are more fit than others, and their socioeconomic status is deserved. The contingent effects of specific historical tendencies and social institutions are exalted with the supposedly providential necessity of DNA. Thus the complex economic history resulting in the hegemony of Europe, the United States, and East Asia is taken to mean that whites and Asians are the most biologically fit; the effects of constrained social mobility and the self-reinforcing effects of economic inequality become the claim that poverty is heritable. The fantasy of meritocracy cannot survive a confrontation with the reality of a world shaped by imperialism and white supremacy. But unlike liberals who believe in the fantasy, Land admits its implications.

Though it is now put to the service of the hyperracist agenda, “human biodiversity” was initially a neutral term coined by anthropologist Jonathan Marks, whose work was an innovative synthesis of anthropology and genetics. In the late nineties, it was adopted by Steve Sailer, a journalist then at National Review, who sat perched on the fence between mainstream conservatism and white nationalism. He has since fallen off the far right end, and now writes for racist publications like VDARE.

Scientific racism became a mainstream controversy once again when New York Times writer Nicholas Wade’s 2014 book A Troublesome Inheritance argued for the distinct categorization of “three major races,” in a hierarchical taxonomy that explains the historical “rise of the west.” More than 100 population geneticists wrote an open letter to the Times disavowing Wade’s “misappropriation of research from our field.” They concluded that “there is no support from the field of population genetics for Wade’s conjectures.”

Another dissenter was Jonathan Marks. He has tirelessly rejected the misuse of the term he coined, openly criticizing A Troublesome Inheritance, The Bell Curve, and other conflations of culture and biology. This did not require a revision of his theory. His 1995 book Human Biodiversity stated from the outset that “the heredity of race is not genetic, but social.”

Supercapitalism

Machine learning can be so dazzling, we tend to forget that it’s shaped by human intervention. As triumphant as Google was over its new translation system, another recent machine learning experiment — Microsoft’s Tay — showed just how volatile that relationship can be. Intended as the most innocuous AI possible, Tay, an acronym for “thinking about you,” was a simulation of a social media user modeled after a teenage girl. Tay was released on Twitter on March 23, 2016, and started the day making small talk, repeating memes, and learning the lyrics to “Never Gonna Give You Up.” By afternoon, with the help of some prodding from 4channers, Tay had become a Holocaust denier and 9/11 truther. Microsoft shut it down after 16 hours.

A report from Artificial Intelligence Now, a symposium on the potential effects of machine intelligence on society, offers an explanation of this phenomenon, and its broader implications. Machine learning is subject to data bias: “AI systems depend on the data they are given, and may reflect back the characteristics of such data, including any biases, in the models of the world they create.” Machine learning is a case of Land’s hyperstition, slipping between belief and technology. The values of the programmer shape the sometimes tangible outputs of the resulting machine.
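The mechanism of data bias can be made concrete with a toy sketch, not drawn from the report: a model that does nothing more than learn label frequencies from its training data will faithfully reproduce whatever prejudice those labels encode. The group names and numbers below are invented for illustration only.

```python
# Minimal sketch of data bias: a frequency-based "model" trained on
# skewed labels reflects the skew back as prediction. Groups "A" and
# "B" and all counts here are hypothetical.
from collections import defaultdict

def train(examples):
    # Count how often each group is labeled "high risk" in the data.
    counts = defaultdict(lambda: [0, 0])  # group -> [high_risk, total]
    for group, high_risk in examples:
        counts[group][0] += int(high_risk)
        counts[group][1] += 1
    # The "model" is nothing but the observed rate per group.
    return {g: hi / total for g, (hi, total) in counts.items()}

def predict(model, group):
    # Predict "high risk" whenever the learned rate exceeds 50 percent.
    return model[group] > 0.5

# Biased training data: group B is disproportionately labeled "high risk".
data = [("A", False)] * 80 + [("A", True)] * 20 \
     + [("B", False)] * 40 + [("B", True)] * 60

model = train(data)
print(predict(model, "A"))  # False
print(predict(model, "B"))  # True: the bias in the labels becomes the model
```

Nothing in the code “decides” that group B is riskier; the skewed labels do. Real systems are far more elaborate, but the same dynamic holds: the model’s world is the data’s world.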

The risk is that AI systems could “exacerbate the discriminatory dynamics that create social inequality, and would likely do so in ways that would be less obvious than human prejudice and implicit bias.” As principal researcher Kate Crawford puts it, artificial intelligence has a “white guy problem.” There are disturbing examples, like a study by ProPublica that found that a machine algorithm designed to measure rates of recidivism was almost twice as likely to falsely categorize black defendants as future criminals. And the software used for data mining by U.S. intelligence agencies, produced by Peter Thiel’s Palantir, hardly seems optimized to protect civil liberties in the age of the Muslim Ban.

Moreover, cybersecurity researcher Heather Roff has pointed to the frequent gendering of humanoid robots: military technology, like the Navy’s grenade launcher SAFFiR, is built to resemble a male body, and service technology, like the iPhone’s Siri, is presented as female. Traditional gender roles that equate masculinity with power and femininity with subservience are reproduced by design. This is no surprise, considering that women make up 26 percent of the computing workforce, down from 35 percent in 1990, according to the AAUW. A 2016 survey found that 88 percent of women in Silicon Valley reported experiencing unconscious gender bias at work.

Michael Anissimov told Gizmodo in 2015 about a counterpart to AI: intelligence augmentation, or the synthesis of technology with the human mind. He described one potential outcome: “a powerful leader making use of intelligence enhancement technology to put himself in an unassailable position.” It’s a prospect that may strike you differently depending on whether or not you consider monarchy a desirable system of government.

Even the supposedly apolitical dream of transhumanism conceals an ideology. Like Anissimov, Elon Musk anticipates “a closer merger of biological intelligence and digital intelligence,” as he put it in a speech in Dubai. Meanwhile, back on Earth, his employees are held fast in a fleshly present. A Tesla worker recently wrote a Medium post describing the all-too-human conditions Musk’s employees are subject to. “I often feel like I am working for a company of the future under working conditions of the past,” he wrote.

“Really don’t want to get in politics. I just want to help invent and develop technologies that improve lives,” Musk said in a tweet. Regardless, along with Peter Thiel, he has taken a role in Trump’s gov-corp. Good news for Yarvin, who told Vox that Musk is his choice for CEO-king of America.

Indeed, figures like Musk and Thiel don’t need to enter the political arena to hold kingly positions. Oxfam recently published data showing that eight men, including Silicon Valley overlords Bill Gates and Mark Zuckerberg, own as much wealth as half the world’s population. There is little sign that the architects of emerging technologies have any intention of changing these circumstances. Elon Musk doesn’t have to wait for a superintelligence to reward him. And the rest of us don’t have to wait to be reduced to productive machines within a network run by computers.

The Real Barrier

In 2013, Alex Williams and Nick Srnicek claimed “accelerationism” for the Left, with their Manifesto for an Accelerationist Politics (MAP). Rather than following Land’s transhumanist trajectory, they picked up the thread of political emancipation left by Deleuze and Guattari, arguing that it should be possible to “accelerate the process of technological evolution” in order to apply it to “socio-political action” oriented toward egalitarian ends.

Left accelerationism is best known for an especially vulgar variant of its argument, the easily scorned notion that the left’s project should be to make capitalism as destructive as possible, in hopes of triggering a revolution. But the MAP text advances a more rational variant, proposing that the productive forces of capitalism should be applied to a social democratic program rather than the existing one.

Land, however, has disavowed any orientation of the accelerationist current toward left politics. In a blog post criticizing left accelerationism, he instead characterizes the left as a “decelerator,” impeding the real capitalist acceleration advocated by the “Outer Right.”

Neoreaction is Accelerationism with a flat tire. Described less figuratively, it is the recognition that the acceleration trend is historically compensated. Beside the speed machine, or industrial capitalism, there is an ever more perfectly weighted decelerator, which gradually drains techno-economic momentum into its own expansion, as it returns dynamic process to meta-stasis. Comically, the fabrication of this braking mechanism is proclaimed as progress. It is the Great Work of the Left. Neoreaction arises through naming it (without excessive affection) as the Cathedral.

He gives a “teleological definition” to the Cathedral, which performs its “emergent function as the cancellation of capitalism.” While history is oriented toward “acceleration into techno-commercial Singularity,” the progressive Cathedral “is the anti-trend required to bring history to a halt.”

Williams and Srnicek are at odds with this interpretation. They draw from Deleuze and Guattari’s account of capitalism, which itself draws from a suggestive idea articulated in Volume 3 of Capital. While Marx said that “the real barrier of capitalist production is capital itself,” Williams and Srnicek conclude that “capitalism cannot be identified as the agent of true acceleration.” Their formulation argues that “capitalism has begun to constrain the productive forces of technology, or at least, direct them towards needlessly narrow ends.”

As the MAP puts it, “rather than a world of space travel, future shock, and revolutionary technological potential, we exist in a time where the only thing which develops is marginally better consumer gadgetry.” This is undeniably true. But although applying an egalitarian ethic to the construction of future machines is a worthy goal, certainly more so than what Williams has described as Land’s lapse into “sick perversity,” there is a more immediate concern: who owns the existing machines, here and now, and who builds them?

The tendency of the community that builds and operates those machines, from titans like Peter Thiel to cult figures like Curtis Yarvin, is openly totalitarian. The New York Times has reported that political donations from Silicon Valley PACs shifted from the Democratic Party toward the GOP in 2016. But their influence on society is not merely channeled through the profit made by machines. It is built into the machines themselves. If, as Jason Smith puts it, “patterns of technological development increasingly reflect capitalist value-relations,” then accelerating capital’s internal tendencies may imply mass unemployment and ecological catastrophe rather than a new horizon of luxury and emancipation.

Beasts of Burden

In his critical history of accelerationism, Malign Velocities, Benjamin Noys likens Land’s vision of capitalism to a Basiliskesque monster, H.P. Lovecraft’s “Shoggoth.” It is a horrifying “beast of burden” created by the mysterious “Old Ones,” whose body, like a Deep Dream, is covered in shifting, pulsating eyes.

Capitalism, for the accelerationist, bears down on us as accelerative liquid monstrosity, capable of absorbing us and, for Land, we must welcome this. The history of slave labor and literally monstrous class struggle is occluded in the accelerationist invocation of the Shoggoth as liquid and accelerative dynamism. The horror involves a forgetting of class struggle (even in dubious fictional form) and the abolition of friction in the name of immersion.

This elision of class antagonism is not merely rhetorical; it is built into the machinery itself. Existing technology immerses us in the extreme political program proffered by neoliberal doctrine. Through data bias, the politics of tech culture will invisibly shape the social organization that results from the technologies of the future. The further right Silicon Valley shifts, the more dangerous their machines will become.

In February, a conference convened in Asilomar, California, dedicated to the development of socially conscious “AI Principles.” It was a literal assembly of what Land, in his Ccru days, named the “Human Security System,” the means by which society obstructs our subjective merging with technology. Wired reported that in the conference’s opening speech, MIT economist Andrew McAfee dismissed “Terminator scenarios,” instead pointing to statistics regarding the effect of automation on jobs.

The new data McAfee cited showed an erosion of the middle class, with low-income and high-income jobs continuing to grow in volume. “If current trends continue,” he said, “people are going to rise up well before the machines do.” According to Wired, AI researchers later accosted McAfee in the hallways to warn him that his statistics understated the speed at which AI would amplify class disparities.

Forget time-traveling killer robots or ancient beasts. NRx has simply exposed the operations of the capitalist machine in the present. Mainstream apologists for neoliberalism have a decision to make: whether to embrace the pseudoscience of Silicon Valley hyperracism, or to reject the vast economic inequalities generated by market society. If the political class is dedicated to keeping the machine running, it falls to the rest of us to shut it down.