Those whacky extropian types have been hitting the nightmare sauce again. This time, while I was having a life and not paying attention, they came up with Roko's Basilisk:

Roko's basilisk is a proposition suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence. According to the proposition, it is possible that this ultimate intelligence may punish those who fail to help it, with greater punishment accorded those who knew the importance of the task. This is conventionally comprehensible, but the notable bit of the basilisk and similar constructions is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you. Roko's basilisk is notable for being completely banned from discussion on LessWrong; any mention of it is deleted. Eliezer Yudkowsky, founder of LessWrong, considers that the basilisk would not work, but will not explain why, because he does not want to encourage discussion of the notion of acausal trade with unfriendly possible superintelligences.

Leaving aside the essentially Calvinist nature of Extropian techno-theology exposed herein (thou canst be punished in the afterlife for not devoting thine every waking moment to fighting for God, thou miserable slacking sinner), it amuses me that these folks actually presume that we'd cop the blame for it—much less that they seem to be in a tizzy over the mere idea that spreading this meme could be tantamount to a crime against humanity (because it DOOMS EVERYONE who is aware of it).

The thing is, our feeble human fleshbrains seem rather unlikely to encompass the task of directly creating a hypothetical SI (superintelligence). Even if we're up to creating a human-equivalent AI that can execute faster than real time (a weakly transhuman AI, in other words—faster but not smarter), we're unlikely thereafter to contribute anything much to the SI project once weakly transhuman AIs take up the workload. Per Vinge:

When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale.

Roko's Basilisk might (for some abstract game-theoretical reason) want to punish non-cooperating antecedent intelligences capable of giving rise to it who failed to do so, but would it want to simulate and punish, say, the last common placental ancestor, or the last common human-chimpanzee ancestor? Clearly not: they're obviously incapable of contributing to its goal. And I think that by extending the same argument, we non-augmented pre-post-humans clearly fall into the same basket. It'd be like punishing Hitler's great-great-grandmother for not having the foresight to refrain from giving birth to a monster's great-grandfather.

The screaming vapours over Roko's Basilisk tell us more about the existential outlook of the folks doing the fainting than they do about the deep future. I diagnose an unhealthy chronic infestation of sub-clinical Calvinism (as one observer unkindly put it, "the transhumanists want to be Scientology when they grow up"), drifting dangerously towards the vile and inhumane doctrine of total depravity. Theologians have been indulging in this sort of tail-chasing wank-fest for centuries, and if the transhumanists don't sit up and pay attention, they are in danger of merely reinventing Christianity in a more dour and fun-phobic guise. See also: Nikolai Fyodorovich Fyodorov.