In my junior year at Stanford I started inviting David Pearce (IEET Fellow, who advocates the “abolition of suffering”) to give talks there. I was able to hang out with him over the course of many afternoons, and even had the chance to interview him once. He made me aware of the causal relevance of the Stanford Transhumanist Association (it is a force multiplier, memetically speaking), and solidified my commitment to prioritizing suffering over everything else.

In addition to that, we talked at great length about the philosophy of mind. This is a thorny subject, you see. In the rationalist community the most prevalent philosophy of mind is (neuron-resolution-level) functionalism. For four years I held the strong view that functionalism was the only possible theory of mind, so I know what motivates this particular theory of consciousness.

After discussing this with David Pearce for tens of hours, I was finally able to perform the mental moves required to actually grasp the problem with this view. A growth mindset is key here. If at first his arguments don’t seem relevant, it may be because you are not representing the question properly. Getting your mind to actually ask the questions (rather than merely saying them) is half the battle.

I had no option but to revisit my background philosophical assumptions. In the transhumanist community, especially within the inner circle of people who are philosophically opinionated and highly literate, my current philosophy of mind is equated with psychosis, or worse, wishful thinking. I must emphasize: I used to be a functionalist who saw no problem with AGI implemented in supercomputers. Now I see a big problem, and the problem is phenomenal binding.

Phenomenal Binding. If you don’t grasp at first what those two words refer to, keep trying. Why? Because the referent of “phenomenal binding” can only be grasped by you, in a looking-inward sort of way. What is an example of phenomenal binding? Look at how your left and right visual fields form one visual field. And that one visual field can also be divided into top and bottom. Is there any “natural division”? Is your visual field made of “a right and a left part” or “a top and a bottom part”?

In reality, your visual field does not have parts. It is we who point at different regions and call them parts. But unlike the parts of physical objects, changing any part of your visual field ontologically modifies the nature of the entire visual field. The bottom-left leg of a chair does not change when you cut off the top-right leg of the chair. But if you experience blue in the left side of your visual field, your whole visual field is now one where “the left side is blue and the right isn’t”.

The right side, then, is “the right side of a visual field that has blue at the left.” And in this sense, both parts are somehow simultaneously connected to each other by together forming a bigger whole. This particular union is what we call phenomenal binding: the binding of the various qualities of your experience into a geometrically arranged system of relationships that shows itself as a unity. Most people I talk to fail to grasp the meaning of this unity, and are skeptical that there is anything real to the unity of consciousness. This, I think, is the result of implicit background assumptions about the nature of consciousness, which prevent them from realizing certain insights.

I came to the conclusion that ending suffering is the most important thing by contemplating ‘oneness.’

The idea of taking your values seriously is something that I have played with from an early age. That said, for the greater part of my life, I was a committed Classical Utilitarian. In other words, I thought that it was ethically permissible (and indeed, in some cases, required) to have some entities experience suffering so that a much, much larger number of them could experience bliss. In fact, we already implicitly accept this sort of tradeoff when we plan our own lives. As an example, you may decide that experiencing mild anxiety when asking a handsome person out for a drink is worth it overall.

One useful trick is to ask yourself “would I be willing to experience X so that I could experience Y?” Now, in the normal range of human experience, it is actually sensible to experience some genuinely negative states in order to experience further bliss later on. But there is a horrible bias here: When we decide tradeoffs for ourselves, we can only sample our own limited repertoire of felt sorrows and wonders. As it turns out, the state-space of possible conscious experiences is larger than you can conceive (this is a fact). And unfortunately, there exist entire landscapes of possible experiences that, if you were ever to feel a single second of them, you would conclude that life as a whole was not worth the trouble.

There is a threshold of negative hedonic tone below which no intellectualizing can possibly justify its existence. And, unfortunately, I needed to be made aware of the depth of hellish reality before I recognized the ethical imbalance between maximizing bliss and minimizing suffering.

Now, I would like to use this chance (yes, in this very paragraph) to mention that my views on consciousness and the ethical primacy of suffering are not interdependent. If Strawsonian physicalism is false and, say, dualism, materialism, theism, or whatever else happens to be the correct model, that would not change by one speck my uncompromising commitment to eliminating suffering. Theories of consciousness, however, have a tremendous effect on the specific approach that people take to accomplish their goals. And transhumanism and consciousness are for that reason intimately connected:

Consciousness is not an irrelevant subject for people interested in transhumanism. In particular, the topic tends to come up specifically when we discuss things such as mind-uploading and the digital replication of someone’s psychological makeup. In brief, depending on your views on consciousness, you may either judge a given outcome to be equivalent to “living millions of years” or “being murdered by the mind-uploading procedure.” Understanding the causes of consciousness is key for speculating on (and indeed constructing) possible futures, and this realization is only the beginning.

You need to actually solve the problem, or trust that your tribe has solved the problem for you. Since tribes are incompetent at philosophical matters, you can really only rely on yourself. Will you yourself upload your mind? Will you teleport? Will you mind-meld? Will you conduct a Generalized Wada Test? Will you divide your brain into pieces and combine them with others? When, if at all, would you stop existing, would you stop being “you”, would you stop being in control of your future?

In order to answer those questions you first need to have an opinion you can believe in regarding all of these ideas: phenomenal binding, integrated information theory, functionalism, dualism, panpsychism, quantum mind theories, Open Individualism, etc.

If you don’t think about these questions, you are at risk of being killed by bad philosophy.