AI is one of the technologies that could revolutionize the world; some people call it the electricity of the twenty-first century. Researchers and professionals need to be aware of the ethical and social implications this technology poses. We are responsible for building robots and AI systems that help and empower humanity.

AI and robotics are going to shape our future. Below are ten issues that professionals and researchers need to address in order to design intelligent systems that help humanity.

Misinformation and Fake News

The flow of misinformation, together with our natural tendency to favor information that confirms what we already believe (a phenomenon called confirmation bias), is a threat to an informed democracy. Russian interference in the US elections, the Brexit campaign and the Catalonia crisis are examples of how social media can massively spread misinformation and fake news. Recent advances in computer vision make it possible to convincingly fake a video of President Obama. It is an open question how institutions are going to address this threat.

Job Displacement

The scientific revolution in the 18th century and the industrial revolution in the 19th marked a complete change in society. For thousands of years before them, economic growth was practically negligible. During the 19th and 20th centuries, the pace of societal development was remarkable.

In the 19th century there was a group in the UK called the Luddites, who protested against the automation of the textile industry by destroying machinery. Since then, a recurrent fear has been that automation and technological advances will produce mass unemployment. Even though that prediction has proven incorrect, job displacement has nonetheless been painful. PwC estimates that by 2030 around 30% of jobs will be automated. Under these circumstances, governments and companies should give workers the tools to adapt to these changes by supporting education and job relocation.

Privacy

The importance of privacy has been all over the news lately due to the Cambridge Analytica scandal, in which data from 87 million Facebook profiles was harvested and used to influence the US election and the Brexit campaign. Privacy is a human right and should be protected against misuse.

Cybersecurity

Cybersecurity is one of the biggest concerns of governments and companies, especially banks. A robbery of $1 billion from banks in Russia, Europe and China was reported in 2015, and half a billion dollars was stolen from the cryptocurrency exchange Coincheck. AI can help protect against these vulnerabilities, but it can also be used by hackers to find new, sophisticated ways of attacking institutions.

Mistakes of AI

Last month, a woman in the US was hit and killed overnight by an Uber self-driving car while walking across the street. Like any other technological system, AI systems can make mistakes. It is a common misconception that robots are infallible and infinitely precise. A common way for some professors in my old lab to greet their robotics PhD students was: what have you broken?

Military Robots

There is an ongoing debate about controlling the development of military robots and banning autonomous weapons. An open letter signed by 25,000 AI researchers and professionals asks for a ban on autonomous weapons without human supervision, to avoid an international military AI arms race.

Algorithmic Bias

We have to work hard to avoid bias and discrimination when developing AI algorithms. A specific example is face detection using Haar cascades, which has a lower detection rate for dark-skinned people than for light-skinned people. This happens because the algorithm is designed to find a double-T pattern in a grayscale image of the person's face, corresponding to the eyebrows, nose and mouth. This pattern is more difficult to find in a person with dark skin.

Haar cascades are not racist (how could an algorithm be?), but many people can feel insulted. When programming these algorithms, we need to be mindful of their limitations, be transparent with users by explaining how the algorithm works, or use a technique that is more effective with dark-skinned people.
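The contrast problem above can be illustrated with a toy sketch. This is not OpenCV's actual detector (which combines thousands of learned rectangle features in a cascade); it is a minimal NumPy illustration of a single Haar-like feature, with the band positions and synthetic pixel values chosen purely as assumptions for the example. It shows how the same facial pattern produces a weaker feature response when the contrast between skin and facial features is lower, which is what hurts the detection rate.

```python
import numpy as np

def haar_eye_feature(face):
    """Toy Haar-like feature: mean intensity of the band just below the
    eyes minus the eyebrow/eye band itself. A real detector thresholds
    thousands of such rectangle contrasts; here we compute just one."""
    h = face.shape[0]
    eye_band = face[h // 4 : h // 2, :].mean()      # eyebrows/eyes region
    below_band = face[h // 2 : 3 * h // 4, :].mean()  # nose/cheek region
    return below_band - eye_band  # large when eyes are clearly darker

# Two synthetic 8x8 grayscale "faces" with the same dark eye band (60)
# but different skin brightness, so different feature contrast.
high_contrast = np.full((8, 8), 200.0)  # light skin tone
high_contrast[2:4, :] = 60.0            # dark eyebrow/eye band

low_contrast = np.full((8, 8), 90.0)    # darker skin tone
low_contrast[2:4, :] = 60.0             # same eyebrow/eye band

print(haar_eye_feature(high_contrast))  # strong response: 140.0
print(haar_eye_feature(low_contrast))   # weak response: 30.0
```

A fixed decision threshold tuned on high-contrast faces (say, anything above 100 counts as "eyes found") would pass the first image and reject the second, even though both contain the same pattern, which is the essence of this kind of bias.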

Regulation

Existing laws were not developed with AI in mind; however, that does not mean that AI-based products and services are unregulated. As suggested by Brad Smith, Chief Legal Officer at Microsoft, "Governments must balance support for innovation with the need to ensure consumer safety by holding the makers of AI systems responsible for harm caused by unreasonable practices". Policymakers, researchers and professionals should work together to make sure that AI benefits humanity.

Superintelligence

Some tech leaders have voiced concerns about the possible threats of AI. One example is Elon Musk, who claimed that AI is riskier than North Korea. These words drew strong criticism from the scientific community.

Superintelligence generally refers to a state in which a machine recursively improves itself, reaching a point where it surpasses the most intelligent human by orders of magnitude. Some enthusiasts, like Ray Kurzweil, believe that we will reach that state by 2045. Others, like François Chollet, believe that it is impossible.

Robot Rights

Should robots have rights? If we think of a robot as an advanced washing machine, then no. However, if robots were able to have emotions or feelings, the answer is not so clear. One of the pioneers of AI, Marvin Minsky, believed that there is no fundamental difference between humans and machines, and that general AI won't be possible unless robots have self-conscious emotions.

One suggestion in the debate around robot rights is that robots should be granted the right to exist and perform their mission, but that this right should be linked to the duty of serving humans. There is a lot of controversy in this area. Meanwhile, in 2017, the robot Sophia was granted Saudi Arabian citizenship, and even Will Smith flirted with her.