Before you rule out so-called killer robots, consider their applications in counter-terrorism.


The prospect of intelligence triumphing over ignorance is always encouraging.

As a secular method to understand and explain the world, science can reveal gaps and holes in religious dogma, and by doing so, challenge extremist religious beliefs that do not hold up to empirical observation. As the world becomes increasingly networked (thanks in part to science), access to scientific knowledge may disrupt the very belief systems that are exploited and manipulated to recruit and motivate terrorists.

But it could take years if not decades before science as a knowledge system infiltrates past the authoritarian walls of religious fundamentalism.

In a more practical sense, applied science, and particularly artificial intelligence, may provide more immediate tactical benefits.

Enter killer robots, artificially intelligent lethal machines capable of selecting and engaging targets without human intervention.


Killer robots have received considerable bad press in recent months. Many scientists, nongovernmental organizations, and states have called for a preemptive ban on their development and eventual use on the battlefield. They fear that lethal autonomous robots may increase the likelihood of war and could one day pose an existential threat to humankind.

This anxiety is nothing new. In an article published in 1863 entitled “Darwin among the Machines,” English writer Samuel Butler argued that “the machines will hold the real supremacy over the world and its inhabitants.” The author recommended that as a precaution, mankind should return to the “primeval condition of the race.”

Technophobic sentiments (or perhaps neophobic ones, an aversion to all things new) have ebbed and flowed throughout history, peaking when revolutionary technologies were introduced into society. Seventeenth-century Japan rejected the use of firearms, then an “advanced military technology.” Nineteenth-century England grappled with the Luddites, who smashed mechanical looms for fear that they would put people out of work.


In many ways, artificial intelligence is different because it raises unique issues about control, legitimacy, and accountability. But could the pessimistic prognoses about killer robots miss the forest for the trees? What if robots were used as partners of peace and promoters of global order and justice?

Or, importantly, as terrorist hunters?

The war on ISIS shows no end in sight. The group’s unrestrained campaign of violence in Iraq and Syria continues to reveal new shades of brutality. Its disregard for the well-being of humanity is unrivaled in the 21st century. I say “humanity” because ISIS’ political war is costing the lives of the innocent by using tactics that are perhaps best suited to earlier iterations of the human race. Rape, beheadings, torture. Repeat.

But I could just as well say that the group has no respect for the “humanities,” the branch of human learning that studies human culture. ISIS’ iconoclastic crusade against Syrian and Iraqi cultural heritage is well documented, both by the group itself and the international community.


The group’s obliteration of numerous World Heritage Sites, including the recent destruction of the Baalshamin temple in Palmyra, and of priceless cultural artifacts around the region are part of a systematic campaign to enforce their puritanical interpretation of Islam. For ISIS, cultural cleansing is necessary to wipe the slate clean and build a caliphate free from idolatry.

As a commercial hub linking the Far East with the Roman Empire, the city of Palmyra marked “the crossroads of several civilizations in the ancient world.” Its destruction has been condemned as a “war crime” by UNESCO.

Irina Bokova, UNESCO’s chief, deplored these actions as the “most brutal, systematic” destruction of cultural heritage since World War II.


These ancient sites are symbols of humanity’s cultural history: a reminder of how webs of human relations intersect and flows of knowledge interact. This is what makes ISIS particularly dangerous: the group is not only murdering members of communities but also destroying the cultural foundations upon which those communities were built.

Culture is not some whimsical collection of pretty paintings and table manners; it is a reflection and embodiment of social identity. Through language, art, music, knowledge, and religion we continuously give meaning to our social existence. By destroying these cultural anchors, ISIS is on an ideological path that would earn nods of evil approval from Hitler and Malan.

Earlier this year, Dario Franceschini, Italian Minister of Cultural Heritage and Tourism, called for the formation of a UN military force to protect the world’s cultural heritage. Killer robots would be particularly useful against groups like ISIS, where the political costs of putting boots on the ground are too high for major military powers, and the political momentum too low to justify human military intervention to protect sites of cultural importance.


In addition to using robots offensively to fight terrorists, robots could be used to promote peaceful objectives, such as protecting humanitarian convoys, refugee camps, schools, hospitals, and museums.

First iterations will likely be semi-autonomous, featuring some level of human supervision and control. Once the technology is sufficiently capable of meeting the stringent standards of international humanitarian law, such as discriminating between combatants and civilians, as well as operational safety, such as recognizing friendly fire, greater autonomy may be delegated to the robot.

ISIS may be defeated before killer robots ever see the light of day. But the value of autonomous lethal technology, operating on legally and morally sound grounds, should not be underestimated in solving global security problems. Scientific advances, and their evolving creations like artificial intelligence, must be given serious thought as a bridge to peace, or at the very least as a weapon to defeat terror.

Lucas Bento is an attorney in New York specializing in complex litigation and international arbitration.