What was Apple thinking when it launched the iPhone? It was an impressive bit of technology, poised to revolutionise the smartphone industry, and set to become nearly ubiquitous within a decade. The social consequences have been dramatic. Many of those consequences have been positive: increased connectivity, increased knowledge and increased day-to-day convenience.

A considerable number have been quite negative: the assault on privacy, increased distractibility, endless social noise. But were any of them weighing on the mind of Steve Jobs when he stepped onstage to deliver his keynote on January 9th, 2007?



Some probably were, but more than likely they leaned toward the positive end of the spectrum. Jobs was famous for his ‘reality distortion field’; it’s unlikely he allowed the negative to hold him back for more than a few milliseconds. It was a cool product and it was bound to be a big seller. That’s all that mattered. But when you think about it, this attitude is pretty odd. The success of the iPhone and subsequent smartphones has given rise to one of the biggest social experiments in human history. The consequences of near-ubiquitous smartphone use were uncertain at the time. Why didn’t we insist on Jobs giving it a good deal more thought and scrutiny? Imagine if, instead of an iPhone, he had been launching a revolutionary new cancer drug. In that case we would have insisted upon a decade of trials and experiments, with animal and human subjects, before it could be brought to market. Why are we so blasé about information technology (and other technologies) vis-à-vis medication?



That’s the question that motivates Ibo van de Poel in his article ‘An Ethical Framework for Evaluating Experimental Technology’. Van de Poel is one of the chief advocates of the view that new technologies are social experiments and should be subject to the same sort of ethical scrutiny as medical experiments. Currently this is not being done, but he tries to develop a framework that would make it possible. In this blog post, I’m going to try to explain the main elements of that framework.





1. The Experimental Nature of New Technology

I want to start by considering the motivation for van de Poel’s article in more depth. While doing so, I’ll stick with the example of the iPhone launch and compare it to other technological developments. At the time of its launch, the iPhone had two key properties that are shared with many other types of technology:

Significant Impact Potential: It had the potential to cause significant social changes if it took off.

Uncertain and Unknown Impact: Many of the potential impacts could be speculated about but not actually predicted or quantified in any meaningful way; some of the potential impacts were completely unknown at the time.

These two properties make the launch of the iPhone rather different from many other technological developments. For example, the construction of a new bridge could be seen as a technological development, but in that case the potential impacts are usually much more easily identified and quantified. The regulatory assessment and evaluation is based on risk, not uncertainty: we have lots of experience building bridges, and the scientific principles underlying their construction are well understood. The regulatory assessment of the iPhone is much trickier. This leads van de Poel to suggest that a special class of technology be singled out for ethical scrutiny:

Experimental Technology: New technology with which there is little operational experience and for which, consequently, the social benefits and risks are uncertain and/or unknown.

Experimental technology of this sort is commonly subject to the ‘Control Dilemma’ - a problem facing many new technologies that was first named and described by David Collingridge:

Control Dilemma: For new technologies, the following is generally true:

(A) In the early phases of development, the technology is malleable and controllable but its social effects are not well understood.

(B) In the later phases, the effects become better understood but the technology is so entrenched in society that it becomes difficult to control.

It’s called a dilemma because it confronts policy-makers and innovators with a tough choice. Either they choose to encourage the technological development and thereby run the risk of profound and uncontrollable social consequences; or they stifle the development in the effort to avoid unnecessary risks. This has led to a number of controversial and (arguably) unhelpful approaches to the assessment of new technologies. In the main, developers are encouraged to conduct cost-benefit analyses of any new technologies with a view to bringing some quantitative precision into the early phase. This is then usually overlaid with some biasing principle such as the precautionary principle (which leans against permitting technologies with significant impact potential) or the proactionary principle (which leans in their favour).



This isn’t a satisfactory state of affairs. All these solutions focus on the first horn of the control dilemma: they try to con us into thinking that the social effects are more knowable in the early phases than they actually are. Van de Poel suggests that we might be better off focusing on the second horn. In other words, we should try to make new technologies more controllable in their later phases by taking a deliberately experimental and incremental approach to their development.



2. An Ethical Framework for Technological Experiments

Approaching new technologies as social experiments requires both a perspectival and practical shift. We need to think about the technology in a new way and put in place practical mechanisms for ensuring effective social experimentation. The practical mechanisms will have epistemic and ethical dimensions. On the epistemic side of things, we need to ensure that we can gather useful information about the impact of technology and feed this into ongoing and future experimentation. On the ethical side of things, we need to ensure that our experiments respect certain ethical principles. It’s the ethical side of things that concerns us here.



The major strength of van de Poel’s article is his attempt to develop a detailed set of principles for ethical technological experimentation. He does this by explicitly appealing to the medical analogy. Medical experimentation has been subject to increasing levels of ethical scrutiny, and detailed theoretical frameworks and practical guidelines have been developed to enable biomedical researchers to comply with appropriate ethical standards. The leading theoretical framework is probably Beauchamp and Childress’s principlism, which is based on four key ethical principles. Any medical experimentation or intervention should abide by these principles:

Non-maleficence: Human subjects should not be harmed.

Beneficence: Human subjects should be benefited.

Autonomy: Human autonomy and agency should be respected.

Justice: The benefits and risks ought to be fairly distributed.

These four principles are general and vague. The idea is that they represent widely shared ethical commitments and can be developed into more detailed practical guidelines for researchers. Again, one of the major strengths of van de Poel’s article is his review of existing medical ethics guidelines (such as the Helsinki Declaration and the Common Rule) and his attempt to code each of those guidelines in terms of Beauchamp and Childress’s four ethical principles. He shows how the vast majority of the specific guidelines fit into those four main categories. The main exception is that some of the guidelines focus on who has responsibility for ensuring that the ethical principles are upheld; another slight exception is that some of the guidelines are explanatory in nature and do not state clear ethical requirements.
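To make the idea of ‘coding’ guidelines a little more concrete, here is a minimal sketch of the exercise as a data-tagging task. The clause texts below are invented paraphrases for illustration only, not quotations from the Helsinki Declaration or the Common Rule, and the tagging is mine rather than van de Poel’s:

```python
# Illustrative sketch of the coding exercise: each guideline clause is
# hand-tagged with the Beauchamp-Childress principle it instantiates,
# with an extra "responsibility" bucket for the exceptions noted above.
# The clauses are hypothetical paraphrases, not quotations.

from collections import defaultdict

PRINCIPLES = {"non-maleficence", "beneficence", "autonomy", "justice",
              "responsibility"}

# (clause, principle) pairs -- the coding step done by hand in the article
coded_clauses = [
    ("Risks to subjects must be minimised", "non-maleficence"),
    ("The research must offer a reasonable prospect of benefit", "beneficence"),
    ("Subjects must give informed consent", "autonomy"),
    ("Vulnerable groups must not bear disproportionate risk", "justice"),
    ("The lead researcher must ensure compliance", "responsibility"),
]

def group_by_principle(clauses):
    """Group hand-coded clauses under the principle they were tagged with."""
    grouped = defaultdict(list)
    for clause, principle in clauses:
        assert principle in PRINCIPLES, f"unknown principle: {principle}"
        grouped[principle].append(clause)
    return dict(grouped)

grouped = group_by_principle(coded_clauses)
```

The point of the sketch is just that the four principles act as a small, fixed category scheme, with a residual category for clauses about who bears responsibility.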



For the details of this coding exercise, I recommend reading van de Poel’s article. I don’t want to dwell on it here because, as he himself notes, these guidelines were developed with the specific vagaries of medical experimentation in mind. He’s interested in developing a framework for other technologies such as the iPhone, the Oculus Rift VR headset, the Microsoft HoloLens AR headset, self-driving cars, new energy technologies and so forth. This requires some adaptation and creativity. He comes up with a list of 16 conditions for ethical technological experimentation. They are illustrated in the diagram below, which also shows exactly how they map onto Beauchamp and Childress’s principles.