The Stone is a forum for contemporary philosophers and other thinkers on issues both timely and timeless.

As the debate on the morality of the United States’ use of unmanned aerial vehicles (“U.A.V.’s,” also known as drones) has intensified in recent weeks, several news and opinion articles have appeared in the media. Two, in particular, both published this month, reflect the current ethical divide on the issue. A feature article in Esquire by Tom Junod censured the “Lethal Presidency of Barack Obama” for the administration’s policy of targeted killings of suspected militants; another, “The Moral Case for Drones,” a news analysis by The Times’ Scott Shane, gathered opinions from experts that implicitly commended the administration for replacing Dresden-style strategic bombing with highly precise attacks that minimize collateral damage.


Amid this discussion, we suggest that an allegory might be helpful to illustrate some of the many moral perils of drone use that have been overlooked. It shows that our attempts to avoid obvious ethical pitfalls of actions like firebombing may leave us vulnerable to other, more subtle, moral dangers.

While drones have become the weapons of our age, the moral dilemma that drone warfare presents is not new. In fact, it is very, very old:

Once upon a time, in a quiet corner of the Middle East, there lived a shepherd named Gyges. Despite the hardships in his life Gyges was relatively satisfied with his meager existence. Then, one day, he found a ring buried in a nearby cave.



This was no ordinary ring; it rendered its wearer invisible. With this new power, Gyges became increasingly dissatisfied with his simple life. Before long, he seduced the queen of the land and began to plot the overthrow of her husband. One evening, Gyges placed the ring on his finger, sneaked into the royal palace, and murdered the king.

In his “Republic,” Plato recounts this tale, but does not tell us the details of the murder. Still, we can rest assured that, like any violent death, it was not a pleasant affair. However, the story ends well, at least for Gyges. He marries the queen and assumes the position of king.

This story, which is as old as Western ethics itself, is meant to elicit a particular moral response from us: disgust. So why do we find Plato’s story so appalling?

Maybe it’s the way that the story replaces moral justification with practical efficiency: Gyges’ being able to commit murder without getting caught, without any real difficulty, does not mean he is justified in doing so. (Expediency is not necessarily a virtue.)

Maybe it’s the way that Gyges’ ring obscures his moral culpability: it’s difficult to blame a person you can’t see, and even harder to bring them to justice.

Maybe it’s that Gyges is successful in his plot: a wicked act not only goes unpunished, but is rewarded.

Maybe it’s the nagging sense that any kingdom based on such deception could not be a just one: what else might happen in such a kingdom under the cover of darkness?

Our disgust with Gyges could be traced to any one of these concerns, or to all of them.


One might argue that the myth of Gyges is a suitable allegory to describe the combatants who have attacked and killed American civilians and troops in the last 10 years. A shepherd from the Middle East discovers that he has the power of invisibility, the power to strike a fatal blow against a more powerful adversary, the power to do so without getting caught, the power to benefit from his deception. These, after all, are the tactics of terrorism.

But the myth of Gyges is really a story about modern counterterrorism, not terrorism.

We believe a stronger comparison can be made between the myth and the moral dangers of employing precision guided munitions and drone technologies to target suspected terrorists. What is distinctive about the tale of Gyges is the ease with which he can commit murder and get away scot-free. The technological advantage provided by the ring ends up serving as the justification of its use.

Terrorists, whatever the moral value of their deeds, may be found and punished; as humans they are subject to retribution, whether it be corporal or legal. They may lose or sacrifice their lives. They may, in fact, be killed in the middle of the night by a drone. Because remote controlled machines cannot suffer these consequences, and the humans who operate them do so at a great distance, the myth of Gyges is more a parable of modern counterterrorism than it is about terrorism.

Only recently has the use of drones begun to touch on questions of morality. Perhaps it’s because the answers to these questions appear self-evident. What could be wrong with the use of unmanned aerial vehicles? After all, they limit the cost of war, in terms of both blood and treasure. The U.S. troops who operate them can maintain safer stand-off positions in Eastern Europe or at home. And armed with precision-guided munitions, these drones are said to limit collateral damage. In 2009, Leon Panetta, who was then the director of the Central Intelligence Agency, said that U.A.V.’s are “very precise and very limited in terms of collateral damage … the only game in town in terms of confronting or trying to disrupt the al Qaeda leadership.” What could be wrong with all this?

Quite a bit, it turns out.

Return, for a minute, to the moral disgust that Gyges evokes in us. Gyges also risked very little in attacking the king. The success of his mission was almost assured, thanks to the technological advantage of his ring. Gyges could sneak past the king’s guards unscathed, so he did not need to kill anyone he did not intend to kill. These are the facts of the matter.

What we find unsettling here is the idea that these facts could be confused for moral justification. Philosophers find this confusion particularly abhorrent and guard against it with the only weapon they have: a distinction. The “fact-value distinction” holds that statements of fact should never be confused with statements of value. More strongly put, this distinction means that statements of fact do not even imply statements of value. “Can” does not imply “ought.” To say that we can target individuals without incurring troop casualties does not imply that we ought to.

This seems so obvious. But, as Peter W. Singer noted earlier this year in The Times, when the Obama administration was asked why continued U.S. military strikes in the Middle East did not constitute a violation of the 1973 War Powers Resolution, it responded that such activities did not “involve the presence of U.S. ground troops, U.S. casualties or a serious threat thereof.” The justification of these strikes rested solely on their ease. The Ring of Gyges has the power to obscure the obvious.

This issue has all the hallmarks of what economists and philosophers call a “moral hazard” — a situation in which greater risks are taken by individuals who are able to avoid shouldering the cost associated with these risks. It thus seems wise, if not convenient, to underscore several ethical points if we are to avoid our own “Gyges moment.”

First, we might remember Marx’s comment that “the hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist.” And precision guided munitions and drones give you a society with perpetual asymmetric wars.

The creation of technology is a value-laden enterprise. It creates the material conditions of culture and society and therefore its creation should be regarded as always already moral and political in nature. However, technology itself (the physical stuff of robotic warfare) is neither smart nor dumb, moral nor immoral. It can be used more or less precisely, but precision and efficiency are not inherently morally good. Imagine a very skilled dentist who painlessly removes the wrong tooth. Imagine a drone equipped with a precision guided munition that kills a completely innocent person, but spares the people who live in his or her neighborhood. The use of impressive technologies does not grant one impressive moral insight. Indeed, as Gyges demonstrates, the opposite can be the case.

Second, assassination and targeted killings have always been in the repertoires of military planners, but never in the history of warfare have they been so cheap and easy. The relatively low number of troop casualties for a military that has turned to drones means that there is relatively little domestic blowback against these wars. The United States and its allies have created the material conditions whereby these wars can carry on indefinitely. The non-combatant casualties in populations attacked by drones accrue slowly but steadily, and they add up. That the casualty rates are relatively low by historical standards — this is no Dresden — is undoubtedly a good thing, but it may allow the international media to overlook pesky little facts like the slow accretion of foreign casualties.


Third, the impressive expediency and accuracy in drone targeting may also allow policymakers and strategists to become lax in their moral decision-making about who exactly should be targeted. Consider the stark contrast between the ambiguous language used to define legitimate targets and the specific technical means a military uses to neutralize these targets. The terms “terrorist,” “enemy combatant,” and “contingent threat” are extremely vague and do very little to articulate the legitimacy of military targets. In contrast, the technical capabilities of weapon systems define and “paint” these targets with ever-greater definition. As weaponry becomes more precise, the language of warfare becomes more ambiguous.

This ambiguity has, for example, altered the discourse surrounding the issue of collateral damage. There are two very different definitions of collateral damage, and these definitions affect the truth of the following statement: “Drone warfare and precision guided munitions limit collateral damage.” One definition views collateral damage as the inadvertent destruction of property and persons in a given attack. In other words, collateral damage refers to “stuff we don’t mean to blow up.” Another definition characterizes collateral damage as objects or individuals “that would not be lawful military targets in the circumstances ruling at the time.” In other words, collateral damage refers to “the good guys.” Since 1998, this second definition is the one that has been used. What is the difference between these definitions?

The first is a description of technical capabilities (being able to hit X while not hitting Y); the second is a normative and indeed legal judgment about who is and is not innocent (and therefore who is a legitimate target and who is not). The first is a matter of fact, the second a matter of value. There is an important difference between these statements, and they should not be confused.

Fourth, questions of combatant status should be the subject of judicial review and moral scrutiny. Instead, if these questions are asked at all, they are answered as if they were mere matters of fact, unilaterally, behind closed doors, rather than through transparent due process. That moral reasoning has become even more slippery of late, as the American government has implied that all military aged males in a strike area are legitimate targets: a “guilt by association” designation.

Finally, as the strategic repertoires of modern militaries expand to include drones and precision guided munitions, it is not at all clear that having more choices leads strategists to make better and more informed ones. In asking, “Is More Choice Better Than Less?” the philosopher Gerald Dworkin once argued that the answer is “not always.” In the words of Kierkegaard: “In possibility everything is possible. Hence in possibility one can go astray in all possible ways.”

Some might object that these guidelines set unrealistically high expectations on military strategists and policymakers. They would probably be right. But no one — except Gyges — said that being ethical was easy.

NOTE

For a broader treatment of this argument, see John Kaag and Whitley Kaufman’s “Military Frameworks: Technological Know-How and the Legitimization of Warfare” in the Cambridge Review of International Affairs (2009), and Sarah Kreps and John Kaag’s “The Use of Unmanned Aerial Vehicles in Contemporary Conflict: A Legal and Ethical Analysis” in Polity (2012).



John Kaag is an assistant professor of philosophy at the University of Massachusetts, Lowell. Sarah Kreps is an assistant professor of government at Cornell University.