You may have a basic understanding of what Artificial Intelligence, or AI, is. But are you familiar with the range of issues it raises for your fundamental rights?

Here, we provide a brief overview of the issues at stake, as well as a look at how Access Now is working to help ensure that when companies develop AI technology — and governments adopt or regulate it — your rights are protected.

What is Artificial Intelligence?

AI refers to the theory and development of computer systems that can act without explicit human instruction and can self-modify as necessary. “AI” is used broadly to refer to a wide range of technological approaches that encompass everything from so-called machine learning to the development of autonomous, connected objects to the futuristic concept of “the Singularity”, as our colleagues at Privacy International explain.

What is the impact of AI on human rights?

The development of robotics and artificial intelligence raises important societal and human rights questions that we must address to ensure that their benefits come hand in hand with respect for fundamental rights. Artificial intelligence technologies and robotics could enhance performance in several sectors, helping to do everything from improving the accuracy of medical diagnoses, to increasing productivity, to reducing risks in the workplace. However, the use of AI is not without risk.

Novels, movies, television shows, and video games give us plenty of opportunities to imagine and explore an (often dystopian) AI-dominated future. People worry about a time when robots equipped with AI surpass human intelligence and end up making damaging and irreversible changes to human civilisation. In this context, it is not surprising that Elon Musk recently called for regulating the development of artificial intelligence “before it’s too late”. He believes that AI represents an “existential threat to humanity”. Whether or not that is true, however, researchers and experts agree that AI raises serious concerns. It has significant implications for issues such as our privacy, digital security, social issues such as discrimination and diversity, and even our jobs.

Privacy and data protection

To work at all, AI — in particular, machine learning — inherently relies on gathering large amounts of data, and often on the creation of new databases (so-called “Big Data”) that are used to make assumptions about people. These practices can interfere with the fundamental rights to privacy and data protection. Our position is that governments must develop comprehensive frameworks to protect these rights. For instance, the recently adopted EU General Data Protection Regulation (GDPR) enhances users’ rights and includes a reference to a “right to explanation”. This language is aimed at ensuring that we are informed about the logic of the algorithms used to make decisions about us. The right-to-explanation concept is intended to improve transparency and accountability for machine-assisted decision-making, but how it impacts human rights will depend on how national courts across Europe and the European judicial institutions interpret it.

Profiling and discrimination

Many people assume that AI improves on human decision-making, associating computers with logic and imagining that algorithms automatically work against human biases or limitations. In fact, since human beings develop algorithms, they can and do replicate and reinforce our biases, and increasing use of AI may only work to institutionalise discrimination while diminishing human accountability for it.

This should come as no surprise, as these issues are not new. The Guardian recently reported that in the 1980s, “an algorithm used to sift student applications at St George’s Hospital Medical School in London was found to discriminate against women and people with non-European-looking names”. Researchers have discovered bias in the algorithms for systems used for university admissions, human resources, credit ratings, banking, the child support system, the social security system, and more.
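To make the mechanism concrete, here is a minimal, hypothetical sketch (the data and scoring rule are invented for illustration, not drawn from any real system): a naive model fitted to biased historical decisions simply learns and reproduces the bias, with no human decision-maker left to hold accountable.

```python
# Hypothetical illustration: a naive model trained on biased historical
# decisions reproduces that bias. All records below are invented.

from collections import defaultdict

# Historical decisions: (group, qualified, accepted).
# Qualified applicants from group "B" were rejected in the past.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": record past outcomes per (group, qualified) pair.
rates = defaultdict(list)
for group, qualified, accepted in history:
    rates[(group, qualified)].append(accepted)

def predict(group, qualified):
    """Accept an applicant if similar applicants were usually accepted."""
    past = rates[(group, qualified)]
    return sum(past) / len(past) >= 0.5

# Two equally qualified applicants receive different outcomes,
# because the model has encoded the historical discrimination.
print(predict("A", True))   # accepted
print(predict("B", True))   # rejected
```

Nothing in the code mentions discrimination; the bias enters entirely through the training data, which is why audits of data and outcomes, not just of code, are needed.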

Filtering and free expression

Algorithms are already impacting our lives, in most cases without our knowledge, understanding, or control. Yet there remains a big push for using AI to address societal problems. In 2017, it seems like not a day passes without a meeting of experts to discuss how tech companies and governments are using algorithms to deal with hate speech, violent extremism, false news, child pornography, and more. To develop rights-respecting policy and ensure that AI and machine-assisted decisions do not harm human rights, it’s imperative that we put in place measures for transparency and accountability. Otherwise, neither the general public nor decision-makers will have the information necessary to prevent harm. We are especially concerned about the haphazard, uncoordinated development of proposals for using automated technology to regulate content online, without a clear pathway for ensuring the public can evaluate and understand what is being proposed. This represents a serious risk for human rights, in particular to the freedom of expression.

Digital security

In addition, development of AI must go together with robust digital security measures. Companies around the world, from online platforms to health and financial institutions, are investing millions of euros to develop new products that use AI. To reduce risk, these companies must embrace privacy by design, working from the beginning of product development to limit data collection to what is strictly necessary, while at the same time taking a security by design approach, working likewise from the outset to prevent data breaches and limit harmful interference or exploitation of vulnerabilities.

Connectivity

Finally, any framework for deploying AI systems should ideally create incentives for investment in high-speed and reliable network connectivity. Development of AI, like other types of innovation in the digital economy, suffers when connectivity is poor, or when governments choose to interfere with or shut down networks to control the flow of information, in contravention of free expression rights. As we have argued before, explicit recognition of connectivity as a foundation for technological innovation and development is likely to have a positive impact on human rights.

Why is Access Now working on Artificial Intelligence?

Access Now has been working globally to advance the rights of users with respect to data protection, privacy, and digital security since 2009. The emergence of, and increasing reliance on, artificial intelligence, automated decision-making processes, and profiling raise some of the most challenging issues of the 21st century for human rights, ethics, accountability, transparency, and innovation.

For this reason, we see AI and human rights as a top priority in our work in the coming years. Our goal is to work together with academics, civil society, and experts in both the private and public sectors to develop sound policy recommendations for the stakeholders involved in regulating the use and development of AI. Earlier this year, we participated in the two EU consultations on robotics and the data economy, presenting this approach and providing specific recommendations to lawmakers. Our full submissions can be found here and here.

We’ve only just scratched the surface of the issues in play. In our next post, we will define key terms for a common discussion on AI and explore some of the regulatory proposals that lawmakers are advancing. (Stay tuned — you might enjoy learning the difference between a robot and a cyborg!)

Image source: pixabay