Who Are The Lawyers Who Understand AI Algorithms?

Artificial Intelligence needs more lawyers who can work with technologists.

Photo by @designer4u on unsplash.com

There’s been a lot of negative press about AI algorithms lately. Everyone has a different opinion when it comes to inherent bias in Artificial Intelligence systems designed to help us make decisions. It’s easy to say, “Oh no, that Artificial Intelligence algorithm is racist, sexist, or even ageist.” It’s easy to point fingers and fire accusations at our machine counterparts.

An article published by New Scientist identified five biases inherent in existing AI systems that can impact people’s lives in a real way. The most prominent scandal is the one exposing COMPAS, an algorithm used in the US to guide sentencing by predicting the likelihood of criminal reoffending. ProPublica’s analysis found that the algorithm wrongly flagged black defendants as likely reoffenders at a much higher rate than white defendants.

But we humans are the creators of these algorithms. Algorithms are not designed to be biased. Most often, it’s the usage of algorithms that creates bias. The data that the algorithms train on also introduces bias. There is no “perfect fit” in most situations, especially in social situations.

So, when you are trying to fit a round peg into an oval hole, there will be biases. There will be inadequacies of current AI capabilities in AI-enabled intelligent systems.

In this type of environment, what do we need? We need understanding.

When politicians are trying to come up with rules and regulations, what we need is a handful of lawyers and investigators working with technologists to understand the entire landscape of innovation. They don’t need to understand the minute intricacies, but they need enough of an understanding to process the main issues and the bigger picture.

Image by Author

Lawyers need the kind of understanding that product managers of AI-enabled software have. Often, this software impacts people’s lives. The Chinese implementation of the Social Credit System is an example of how much impact AI systems can have on people’s lives. It changes the structure of a society, alters people’s lives, and re-organizes markets for businesses. Given the impact of such innovation, lawyers in the western world have to learn to deal with complexities beyond contracts and the words that describe the algorithms.

They have to truly grasp the basic concepts of what the algorithms are trying to accomplish: their original intention, their true intention, and their effects.

It’s not unlike examining an individual for conformity with the law. AI-enabled systems will become individual creatures of a sort. Unlike individuals, they are not yet capable of higher thinking. They are currently designed for specific purposes and to perform certain functions.

Within that scope, the intention of the systems comes from the organizations that develop, deploy, and use them. When third-party vendors of AI algorithms are added to this picture, it becomes unusually complex. Multiple organizations are involved: the people who develop the algorithms, the people who develop the software and applications that use the algorithms, and the businesses that deploy and use the software for business purposes. If the business is a governmental agency, the picture becomes even more complex.

Image by Author

Behind the math and the patterns, there are goals that the algorithms are designed to accomplish. These goals can get lost in the pile of innovations inside the software that attempts to apply the algorithms. As business procedures on top of software come into the picture, there are opportunities at each step of the way for intentions to be altered.

Innovation is also happening faster than ever before. Most of the time, technologists who are developing new algorithms are tweaking them as they go to meet business needs. The use-cases that arise from business needs may require intentions that deviate significantly from the original intention.

This is when there are a lot of grey areas. In these grey areas, the law has to shake hands with innovation. They have to march side by side to keep up with one another.

The law has to be applied and enforced with a certain agility. Agility and flexibility are not words most people would use to describe the law. But at this time, in the infancy of the age of AI, the law is inadequate to account for the grey areas being discovered.

Having no rules, regulations, or law and order will lead to chaos. Having some is better than none at all. But these rules and regulations have to be careful not to impede the innovation of technology.

This is why getting to the gist of it all requires the cooperation of technologists and lawyers. It’s not an us-against-them situation here. The corporations are not trying to take over the world. The technology companies are not trying to claim dominance over data.

We are all in this uncertain world together. Your data is in the same pile as my data. When security is breached, we all face the same level of risk. When privacy laws are violated, we all experience the same level of impact.

The point is really this: how do we make this work together?

In the western world, our entire system of rule of law is being tested as we catch up to the speed of innovation. As conflicts arise, they are all opportunities to refine regulation and implement new ones.

Projects such as Google’s AI for Social Good, and conferences such as the AINow Symposium, which attempt to bring together technologists, companies, social scientists, governments, lawyers, and regulators, are just examples of how the Age of AI needs cooperation.

The media might do a really good job of identifying issues, raising awareness of them, and influencing people to think about them, but it takes larger organizations that can bring together technologists, companies, and governments to address the deeper issues that arise.

The repercussion of not addressing these issues deeply and collectively, within industries, with governments, and with the cooperation of technologists, is that the problems will be amplified unnecessarily when they impact whole groups of people.

When problems are amplified, the grey areas will seem much more vast than they are.

Today’s lawyers have their work cut out for them in the age of Artificial Intelligence. As AI rules and regulations are refined, lawyers will step in and take center stage to help make sense of these grey areas. They are central to the cooperation of businesses, technologists, and government. They are not only in a position to mediate conflicts; they are also in a position to suggest changes to the existing rule of law.

The Future of Privacy Forum just released The Privacy Expert’s Guide to AI and Machine Learning. It’s a good place for lawyers to start learning about AI and machine learning.

Hopefully, many of these lawyers will learn about the intricacies of AI technology so that they can make sense of the grey areas where a balance will be needed between innovation and regulation.