POSSIBLE RESPONSE

In a post-ageing world centuries hence, reproduction will need to be exceptionally rare and centrally controlled - regardless of whether our quasi-immortal descendants practise hedonic engineering. Otherwise the Earth (or in theory our galaxy or local galactic supercluster, etc) will exceed its physical carrying capacity. However, this kind of speculation involves very complex arguments on the nature of selection pressure in an era when traditional childbearing has more or less ceased.

In the meantime, there will be intense selection pressure, but there are powerful grounds for believing such selection pressure will work against any genotypes/allelic combinations predisposing to Darwinian unpleasantness in all its forms. This is because we are on the brink of a reproductive revolution of designer babies. Prospective parents will shortly be choosing the personalities/genetic make-up of their future children rather than playing genetic roulette. As responsible child-planning becomes common, and preimplantation genetic screening becomes routine, severe selection pressure will come into play against genes/genotypes predisposing to the darker modes of human experience. This isn't the place to attempt formal game-theoretic modelling or a treatise on posthuman population genetics. So for illustrative purposes just imagine: If you were a prospective parent choosing the genetic make-up of your future children, what genetic dial-settings would you opt for? You wouldn't want genotypes predisposing to anxiety disorders, depressive illness, schizoid tendencies, and other undisputed pathologies of mind; but how high (or, in theory, how low) would you set your children's normal hedonic tone? Cross-culturally, parents typically say they want their children to be happy, albeit "naturally" so; but how happy? Redheads may prefer to have red-headed children; but few depressives will want depressive children. All that's needed for selection pressure to get to work here is a partially heritable, slight preference for children who are modestly more temperamentally happy [or less gloomy] than oneself. Selection pressure is fundamentally different when evolution is no longer "blind" and random with respect to what is favoured by natural selection - i.e. when genes/allelic combinations are chosen/designed in anticipation of their likely effects.
Such selection pressure is already manifest in non-human domestic animals; it will shortly come into play in humans. Hence we are entitled to speak of an impending post-Darwinian era - not because selection pressure will be absent (on the contrary!) but because we are poised to switch from the era of "natural" to "unnatural" selection.
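The directional logic above is simple enough to sketch numerically. Below is a toy model - every parameter (the size of the preference, the noise, the units, the generation count) is an illustrative assumption, not an empirical estimate - showing how even a small, consistent parental bias toward slightly happier offspring shifts a population's mean hedonic set-point over generations:

```python
import random

def simulate(generations=20, pop_size=1000, preference=0.1, noise=0.5, seed=0):
    """Toy model of 'unnatural' selection on hedonic set-point.

    Each generation, parents select offspring whose set-point is, on
    average, slightly above their own ('preference'), plus random
    variation ('noise'). All units are arbitrary."""
    rng = random.Random(seed)
    pop = [rng.gauss(0.0, 1.0) for _ in range(pop_size)]
    means = [sum(pop) / pop_size]
    for _ in range(generations):
        # Offspring inherit the parental value shifted by a small bias.
        pop = [p + preference + rng.gauss(0.0, noise) for p in pop]
        means.append(sum(pop) / pop_size)
    return means

means = simulate()
print(f"mean hedonic set-point: gen 0 = {means[0]:+.2f}, gen 20 = {means[-1]:+.2f}")
```

With a per-generation bias of only 0.1 standard deviations, the mean drifts steadily upward: no strong preference is needed, only a consistently directional one.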

This momentous reproductive shift certainly doesn't exclude the likelihood of continuing selection pressure against some modes of subjective well-being, e.g. undifferentiated bliss. Wireheads and their natural analogues, for instance, will presumably always be at a reproductive disadvantage. But a motivational system of high-functioning gradients of superhappiness may be extremely adaptive if that's the behavioural phenotype we want for our children. Children genetically predisposed to be abundantly happy and affectionate are more rewarding to raise than surly, depressive children. It should be stressed that this optimistic scenario doesn't mean that posthuman social life will resemble a communal hug-in or an MDMA-driven rave. There can be functional analogues of depressive realism even in paradise.



9) The RISKS OF HASTE Objection

The priority should be superintelligence, not superhappiness. Only after we are intelligent enough to understand the implications of what we're doing should we explore radical mood-enrichment. The risks of acting prematurely and building a fool's paradise are too great.

POSSIBLE RESPONSE

As it stands, this objection may well be correct. Only superintelligence can maximise the utility function of the universe. But emotional enrichment - as distinct from crude pleasure-amplification - is itself presumably a critical ingredient of superintelligence. So we should take care to avoid constructing a false dichotomy: mature superintelligence will presumably entail an unimaginably enriched capacity for empathetic understanding - a "God's eye view". This point is relevant because - given some fairly modest assumptions and even the slightest sense of moral urgency - we should be prepared, if necessary, to take risks to eliminate a terrible scourge, to prevent suffering and cruelty to our fellow creatures, or to act when the risks of inaction are greater than the risks of action. What's important is assessing risk-reward ratios. One obvious parallel is ageing. Bluntly, we are all dying. If you regard ageing as a horrible disease, then you may be prepared to run risks to retard its progression. Thus one might take a daily cocktail of supplements (e.g. resveratrol, selegiline, etc) that increases lifespan and life expectancy in "animal models", but whose efficacy and long-term safety are unproven in controlled longitudinal studies in humans. Perhaps the minority of "healthy" [i.e. dying] humans who adopt such a regimen misjudge the risk-reward ratio involved; but if so, the error doesn't reside in a willingness to take calculated risks - merely in their miscalculation. There are perils in inertia no less than in initiative. Likewise, current victims of intractable pain or chronic depression, whose quality of life is meagre (or worse), may justifiably take more therapeutic risks, and explore more experimental treatments, to alleviate their distress than the psychologically robust who already enjoy life to the full - by mediocre Darwinian standards, at any rate.
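The risk-reward point can be made explicit with a back-of-the-envelope expected-value comparison. The payoffs and probabilities below are purely hypothetical stand-ins, chosen only to exhibit the structure of the argument: inaction is not a zero-risk baseline when the status quo (ageing) itself carries a cost.

```python
def expected_value(outcomes):
    """Sum of probability-weighted payoffs for one course of action."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * v for p, v in outcomes)

# Hypothetical payoffs in arbitrary utility units:
# taking an unproven regimen vs. doing nothing while ageing continues.
act = expected_value([(0.7, +5.0),    # regimen works as hoped
                      (0.3, -2.0)])   # side-effects outweigh the benefit
wait = expected_value([(1.0, -1.0)])  # certain, ongoing cost of ageing

print(f"act: {act:+.2f}  wait: {wait:+.2f}")
```

With these arbitrary numbers, acting comes out ahead; flip the probabilities and the conclusion flips too. The point is simply that both options carry risk, so it is the comparison - not the mere presence of risk on one side - that matters.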

A complication of this analysis is that all enhancement technologies may be viewed as remedial therapies by the enlightened standards of our successors. Yet there is a fundamental difference between taking risks to alleviate serious disease, chronic pain syndromes or prolonged psychological distress and taking risks to enhance pre-existing well-being.

Sadly, there aren't any short-cuts. So in that sense the objection is unanswerable. Current recreational euphoriants, for instance, may give their users a faint, fleeting, shallow foretaste of posthuman bliss; but for the most part, they activate the hedonic treadmill - and produce nasty side-effects, insidious or otherwise. It's worth recalling that some very smart people have been seduced. Twenty-eight-year-old Viennese neurologist Dr Sigmund Freud wrote a paean of scholarly praise for the therapeutic benefits of cocaine, newly isolated from the coca plant. Bayer introduced Heroin as a non-addictive remedy for coughs. And in the words of one intravenous heroin user: "It's so good. Don't even try it once." Any potential wonderdrug or gene-therapy that promises a miraculous breakthrough to posthuman nirvana needs to be investigated with both extraordinary urgency and extraordinary scepticism.



10) The CARBON CHAUVINISM Objection

This talk has focused on enriching the "biological substrates" of emotion. Yet given some quite widely accepted functionalist arguments in contemporary philosophy of mind, why not scan, digitize, and "upload" ourselves into silicon or another medium - and then reprogram ourselves? The exponential growth of computing power promises to endow uploads with the self-reprogramming ability to cure ageing, infirmity and disease; attain true superintelligence; enjoy total morphological freedom; and amplify our reward pathways too. If the exponential growth of [inorganic] computer power continues unchecked, then this transformation may be only decades away - not the millennia that a meatware transition to posthumanity would presumably entail.

POSSIBLE RESPONSE

The range of opinions among transhumanists on uploading runs all the way from those who think it's inevitable to those who view it as some kind of millennialist death cult. If your overriding ethical goal is "merely" to eradicate suffering, then uploading could almost certainly achieve its abolition - one way or the other. However, most people aren't negative utilitarians. If you want "your" upload to achieve supersentience as well as superintelligence, or to enjoy posthuman levels of well-being, to achieve quasi-immortality, or simply to conserve your identity as understood today, then the existential risk posed by uploading is immense - perhaps the biggest existential risk the human species has ever contemplated. So before embarking on anything so revolutionary, it's vital that we have a compelling theory of consciousness - and a mathematically exact description of its myriad textures - on pain of creating zombies. Maybe you feel 99% certain that the sceptics are wrong, e.g. neurophilosophers who believe that unitary consciousness depends on quantum coherence, and hence any aspiration to non-trivial digital sentience falls foul of the "von Neumann bottleneck". But either way, the postulation of sentience in silico is not a testable scientific hypothesis. So advocates of uploading are placing a lot of faith in a metaphysical theory. Of course, the conviction that anyone else is conscious is a metaphysical theory too, albeit less controversial.

By way of [false] analogy, consider the game of chess. Imagine a misguided philosopher who claims that what matters when playing chess is not just the sequence of moves, but also the particular textures of the individual chess pieces; and that chess games played with wooden or metal pieces, say, or games played online via computer, can be different in character even if the sequence of moves played is the same. Surely, we would say, this fellow is simply confused: he is missing the point of chess. The particular textures of the pieces, and even the complete absence of any such textures in computer chess matches, are unimportant, since the textures, coloration, and physical composition (etc.) of the pieces are functionally irrelevant to the gameplay - a mere implementation detail. The same game of chess can be multiply realised in different physical substrates. Now consider uploading. Imagine again a naïve-sounding bioconservative who insists that what matters for successful uploading is not just the behaviour [and behavioural dispositions] of hypothetical uploads, but also the particular textures [aka qualia: "what it feels like"] of their mental-cum-perceptual states. Now in one sense, yes, the phenomenal textures [if any] and substrate composition of a hypothetical upload are mere implementation details - functionally irrelevant insofar as the upload has the right functional architecture to support input-output relations identical to its meatworld counterpart. ["If it walks like a duck, quacks like a duck...", etc.] Yes, if we were exhaustively defined by our behavioural patterns, then the spectre of inverted qualia, "Martian pain", absent qualia, and so forth, would be of no consequence. But in another, critically important sense, the analogy with chess fails. "What it feels like" to be me is of the very essence of my personal identity: it's not a trivial implementation detail, but definitive of who I am - my intrinsic nature.
If we had the slightest idea how to scan, record and digitise qualia, then uploading might be feasible; but alas we don't. It is scarcely possible to overstate our scientific ignorance of consciousness. For now, at least, uploading belongs to the realm of science-fantasy rather than science-fiction.

However, let's assume for the sake of argument that sentient uploading will in future be technically and societally feasible - perhaps using quantum computers with a non-classical architecture. Given a mass-upload scenario, the fate of meatware "left behind" is unclear. Unless traditional organic life is to be liquidated - i.e. "destructive" uploading, the final solution to the organic life problem - primordial Darwinian organisms will still need to be "rescued" by their postorganic descendants. So here we come back to the biological substrates of consciousness with which we began.





CONCLUSION

Superintelligence, Superlongevity and Superhappiness?

But by how much? Unlike computing power, an exponential growth of happiness is (presumably) impossible, short of technologies beyond human imagination. Yet securing even an approximately linear growth of its biomarkers would represent a stunning discontinuity in the history of life to date. Posthuman versions of the Goldilocks zone - "not too hot, not too cold" - could potentially exceed the hedonic range adaptive for our hominid ancestors by several orders of magnitude, if not more. Will our posthuman descendants eventually decide, to echo Bill McKibben, "Enough!"? Possibly; but if so, it's unclear how, when and why.

It's worth emphasising that the sorts of scenarios for posthuman mood-enrichment explored here aren't, for the most part, an alternative to other transhuman scenarios of our future, notably superintelligence and superlongevity. On the contrary, a fine-grained control of our emotions together with motivational enhancement should enable us, other things being equal, to realise these scenarios more effectively - and to savour their outcome all the more appreciatively. Nor is hedonic enrichment some kind of prescription for how to live posthuman life - any more than being cured of a chronic pain condition dictates how one should lead a pain-free existence. "The world of the happy is quite different from that of the unhappy", observes Wittgenstein in the Tractatus. Yes, and the world of the superhappy is quite different from the human world. Whether we'll ever investigate its properties, however, is an open question.