From the days of the Acheulean hand-axe on, humans have always had a symbiotic relationship with technology. How far will that relationship go? One haunting vision of the future is provided by the Borg — one of the main villains of the Star Trek universe.

Or at least that seems to be the view of the Star Trek writers. But maybe the Borg aren’t that scary? And maybe they aren’t some distant future possibility? These are the questions that David Gunkel asks in his article ‘Resistance is Futile: Cyborgs, Humanism and the Borg’. Gunkel defends two main claims. The first is that, contrary to what you might have thought, we are already Borg: certain technological, cultural and philosophical developments have sufficiently blurred the line between human and machine for us to count as cyborgs. The second is that cyborgisation may not be as great a threat as the Star Trek writers presume. Gunkel supports this by identifying three possible responses to the reality of cyborgisation, only one of which echoes the threat-narrative of the Star Trek writers.

I want to evaluate Gunkel’s arguments in the remainder of this post. Although I am broadly sympathetic to his claim that we are already Borg (or, at least, that we are heading in that direction), I’m less sure about the claim that this need not be all that threatening. To give a quick precis of my argument: I think Gunkel’s assessment focuses on a red herring. The problem with the Borg is not that they undermine humanism and humanistic values, as he seems to suggest, but rather that they undermine individualism. Although individualism is often subsumed within humanism, the two are separable and should be kept separate for the purposes of this debate.

1. Three Ways in Which we are Already Borg

Gunkel argues that there are three ways in which humanity has already been cyborgised. Let’s go through each of them.

The first way in which we have already been cyborgised is that some of us — possibly the vast majority of us — have become technical or physical cyborgs. That is to say, we have directly integrated technological artifacts into our biological systems. The most obvious example of technical cyborgisation comes from the use of prosthetic devices, such as artificial limbs. Although such prosthetics have been around for a long time, they are now becoming more functional and more impressive. Take the case of Leslie Baugh. He was injured in an industrial accident and had both his arms amputated at the shoulder. He has now been provided with two robotic arms. These are directly integrated with his nervous system and allow him to achieve near-natural motor function. Check it out in the video below:

It’s pretty clear to me that with these prosthetic limbs Baugh is a technical cyborg. Indeed, he is aesthetically very close to the standard science fictional representation of a cyborg. The precise number of people living as such technical cyborgs is unclear, but Gunkel cites one estimate suggesting that 10% of the U.S. population have such devices.

This is arguably a low-ball estimate. Gunkel continues the argument by claiming that pharmacological interventions are a type of cyborg technology. A good example of this is the technology of immunisation. This practice results in the long-term reprogramming of the human immune system. If we include such interventions in the definition of technical cyborgisation, then the number of technical cyborgs is much higher. Indeed, there are probably few, if any, who fail to meet this definition.

The second way in which we have already been cyborgised is metaphorical, rather than technical. At least, that’s the term Gunkel uses to describe the phenomenon. Metaphorical cyborgisation effectively involves the extended mind hypothesis. According to this hypothesis, we are ‘natural born cyborgs’ because we all extend our mentality and physicality into technological artifacts. We thus form extended functional loops that are not confined to the flesh-bags of our organic selves. The obvious modern example is how people use their smartphones as external memory, cognition and sensory devices. I have written about this form of cyborgisation at much greater length on previous occasions, so I won’t belabour it here.

The third way in which we are already cyborgs has to do with the philosophy of ontology. This is the branch of philosophy concerned with what kinds of things exist (and the nature of existence more generally). One important aspect of ontology concerns the classificatory boundaries between different kinds of things. What distinguishes my cat from my dog? My table from my chair? My left hand from my right hand? And so on. Gunkel — endorsing a thesis originally defended by Donna Haraway — claims that we are ontological cyborgs because the classificatory boundaries between humans and other entities have become increasingly blurry in the recent past. Haraway argued that this was true in at least two respects. First, the boundary between humans and animals is much less distinct than it used to be. Capacities such as sentience, rationality, problem-solving and morality have all traditionally been taken to be unique to humanity, but many argue that such capacities can be (and are) shared by animals and, perhaps more importantly, are not obviously shared by all humans either. Thus the boundary between human and animal has been deconstructed. Second, the same has happened with the boundary between human and machine. More and more machines are capable of doing things that were once thought to be uniquely human.

This ontological form of cyborgisation is particularly important for Gunkel’s argument. Although the other forms of cyborgisation hint at it, this third form really underscores the fact that we are in an ‘unstable ontological position’. As he puts it:

[W]e have never really been human. We have always and already been Borg, insofar as the differences between human and animal and animal and machine have been and continue to be undecidable, contentious and provisional.

(Gunkel 2015, 5)

The diagram below summarises Gunkel’s view.

As I say, I am broadly sympathetic to Gunkel’s argument. I may not be as extreme as he is and I may have some quibbles. For instance, I’m not sure how useful it is to extend the definition of technical cyborgisation to include pharmacological interventions. The consumption of food could be classified as an intervention of this type, which would imply that we are always technically cyborgising ourselves. But if we extend the definition that far then I’m not sure that the concept of cyborgisation has any useful content. We will still probably try to distinguish between these mundane forms of technical cyborgisation and the more robotic/digital forms. But maybe that’s Gunkel’s whole point. Also, I’m a little disappointed that Gunkel doesn’t consider some of the criticisms of metaphorical cyborgisation. I’ve done this on previous occasions and I think it does weaken the idea to some extent. Finally, I would also quibble with the extent to which ontological cyborgisation is true. I’m not an essentialist: I don’t think there are essential differences between humans and animals, or between humans and machines. But I still think it’s possible to draw useful conceptual boundaries between these groups.

All that said, I agree with the basic thrust of Gunkel’s position. Technology has always been pushing us in the direction of cyborgisation, and contemporary cultural and philosophical movements are supporting this push.

2. How should we react to cyborgisation?

The critical question is: what does this mean? Should we worry about our cyborgisation? Should we embrace it? Or should we treat it with a degree of equanimity? Gunkel identifies three main answers to these questions.

The first says that we should worry about cyborgisation because it involves dehumanisation. In other words, it involves the conversion of humans into something non-human. This is disturbing because humans are the holders of rights and sources of value in modern culture. Enlightenment philosophy is a philosophy of humanism. It celebrates and protects humanistic qualities. The fear is that cyborgisation will degrade and override these qualities. Cyborgisation consequently constitutes a threat. Popular representations of cyborg lifeforms often support this threat-narrative. This certainly seems to be the case in Star Trek: The Next Generation. Captain Picard seems to embody the values of Enlightenment humanism. His assimilation by the Borg in the episode ‘The Best of Both Worlds’ is thus incredibly poignant. It is a pure fictional representation of dehumanisation.

The second answer is epitomised by transhumanism. This is a philosophical and cultural movement that embraces cyborgisation. It does so not because it rejects humanistic values, but because it wants to perfect them. The transhumanist’s complaint is that the organic shell in which the ‘human’ sits is imperfect. It is prone to infection, decay and decomposition. It constrains and limits our freedom to do what we want and be who we want. Transhumanists thus seek to use technology to overcome these limitations. In a sense then, their goal is to perfect humanism. They see cyborgisation as the way to do this.

The third answer is more radical. The previous two answers both share a commitment to humanism. They differ merely in whether they see cyborgisation as a threat or an opportunity. The third answer calls into question the commitment to humanism itself. It is the posthumanist answer. Proponents of this answer see the cyborg as neither dystopian nor utopian. Rather, they view it as something that helps to deconstruct the humanistic system of values. One can welcome this deconstruction. As Gunkel points out, the category of the ‘human’ has always been socially contested. Indeed, it is often a tool for social oppression and exclusion:

Despite what one might initially think, the term ‘human’ is not some eternal, universal, and immutable Platonic idea. In fact, who is and who is not “human” is something that has been open to considerable ideological negotiations and social pressures. At different times, membership criteria for inclusion in club “anthropos” have been defined in such a way as to not only exclude but to justify the exclusion of others, for example, barbarians, women, Jews, people of color, etc. (Gunkel 2015, 7)

To the extent that it helps us to avoid such exclusion, cyborgisation may be a welcome phenomenon and may encourage us to transition to an alternative (possibly better; possibly not) system of values.

I would imagine these three answers forming a spectrum or dimensional space. One could oscillate between the various views or occupy ‘in between’ positions. I have tried to illustrate this in the diagram below.

3. Concluding Thoughts

I enjoy Gunkel’s analysis of this topic. I am a sucker for science fiction, particularly Star Trek, and any attempt to fuse science fiction with serious philosophical analysis is bound to appeal to me. That said, I disagree with the focus of Gunkel’s article. Although he mentions the importance of freedom and self-determination to the way in which the Star Trek writers conceptualise the Borg-threat, he chooses to focus on the concept of humanism in his discussion of cyborgisation. I see this as a (slight) mistake.

To me, ‘humanism’ (in the sense that Gunkel seems to use the term) is a philosophy that is concerned with identifying the core properties of human beings and then using some of these properties to ground a system of values. So, for example, the capacities for moral judgment, higher reasoning, sentience and self-awareness might all be deemed uniquely human (what separates us from the animals) and sources of value. Human society should be oriented toward the cultivation of these capacities and toward protecting them from malicious interference. Organisms and objects that do not share these capacities are then excluded from these protections. The processes of cyborgisation that Gunkel identifies do indeed undermine this view because they show how the supposedly-unique properties of human beings can be shared by other organisms and objects. This might be threatening to those with a religious or overly biological understanding of what it means to be a ‘human’, but should be unproblematic to most. Indeed, many self-identified ‘humanists’ have no problem with the idea that ‘human’ rights should be extended to animals that share human capacities.

Humanism, so understood, is distinct from individualism. This is the notion that the source of values and bearer of rights within society is the individual: a unique and coherent self that is distinct from other individuals. Humanistic values often incorporate individualistic values — such as the values of freedom and self-determination — but they are distinguishable. I could happily embrace a blurry boundary between the human and the machine (or the human and the animal) while remaining a staunch individualist. I could, for instance, view a chimpanzee as an individual rights bearer, entitled to some degree of freedom and a right to self-determination. I could also, quite happily, view a technical cyborg like Leslie Baugh as an individual rights bearer. The technical and ontological deconstruction of the ‘human’ does not necessarily compromise individualism.

And this, I think, is where Gunkel misses the mark with his analysis of the Borg. The Borg are threatening not because they undermine the philosophy of humanism, but because they undermine the distinct philosophy of individualism. They are threatening because they use technology to deconstruct the unique and coherent self and then assimilate that former self into a more general collective consciousness. The question we should be asking about the ongoing process of cyborgisation is whether it is doing the same thing.