Centre for Artificial Intelligence and Robotics

As it prepares to open its new Centre for Artificial Intelligence and Robotics, a headquarters in The Hague that will monitor developments in artificial intelligence (AI), the United Nations Interregional Crime and Justice Research Institute (UNICRI) has explained the need for the center with a warning that robots could destabilize the world.

AI, and the robots that benefit from it, pose a range of potential threats to humans: from the familiar fears of automation and the mass unemployment that could follow it, to more dramatic concerns that autonomous killer robots will be deployed by those with nefarious aims, or will act on their own. It will be the task of the UNICRI Centre for Artificial Intelligence and Robotics to anticipate each possible threat.

The Guardian reports that UNICRI senior strategic adviser Irakli Beridze said the team in The Hague will also generate ideas about how AI advances could help achieve UN targets. His point seemed to be that while there are risks associated with developments in AI that need to be addressed, there is a bigger picture that the center, as the UN's first permanent office focused on AI, will consider.

“If societies do not adapt quickly enough, this can cause instability,” Beridze told the Dutch newspaper de Telegraaf. “One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organizations and governments. We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this we want to start concrete projects. We will not be a talking club.”

Getting Ready for AI

The UN isn’t alone; others who understand the field are preparing for advances in AI. The United States, China, and Russia are all striving for supremacy in AI-driven weaponry, and Israel is also developing autonomous weapons technology.

In August, over 100 leaders in AI and robotics, including Elon Musk, urged the UN to act against autonomous weapons: “Lethal autonomous weapons threaten to become the third revolution in warfare,” they wrote. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Stephen Hawking shares Musk’s concerns about AI, warning in 2016 that it would be “either the best or the worst thing ever to happen to humanity.” And while many AI experts, including Bill Gates, do not share these concerns, or feel they are overstated, a UN center focused on the issue is probably a good idea. Human rights can be at risk in any area of rapidly developing, disruptive technology, and protecting them is central to the UN’s mandate.