Flashy tech articles in the news today would have you believe that the alarming thing about AI is that, one day, machines might become cognizant and do away with humans: that code written for specific tasks will somehow give rise to a black box from which consciousness (whatever that is) emerges, and machines built for narrow human functions will evolve into something greater. It is not just journalists with no background in science who write this way; public figures such as Stephen Hawking and Elon Musk also preach the specter of sentient machines.

As a programmer and data scientist who uses machine learning models, this seems far away from the reality at hand. I’m not saying it can’t happen one day, but that one day isn’t tomorrow, or the day after.

Where We Are Now:

We are currently able to code narrow artificial intelligence. That is, we have programs that are very good at specific tasks. We can create a program that analyzes every chess game it plays against itself, pulls out the optimal moves based on statistical regularities hidden in the data, and does this extremely well, but that same program can't then play Go, or recognize a picture of a cat. We can write other programs for those tasks. We can even run all those programs in parallel on the same computer. Yet for every task that comes easily to us humans, we have to spend years programming a computer to perfect that one skill. In the end, we machine learning engineers must write the program for each task, and we must feed it the data it learns from.

How, then, is a computer going to decide to look at data it was not given? How is it going to learn to enjoy the sight of trees when it sees pictures of cats? How would we even program a machine to make its own choices, to evolve on its own, when we are still far from fully understanding how humans do it? There is no equivalent of a biological imperative, and no equivalent of evolution. The construction of artificial consciousness is total fantasy at this point, divorced from the reality at hand.

What then is the reality at hand? What is the alarming side of AI?

The reality is that AI and data science are currently in the beginning stages of replacing millions of jobs and, perhaps more importantly, of making statistical decisions that have huge impacts on populations without clarity or ethics encoded.

Automation without Distributed Ownership:

Replacing jobs is beneficial for companies that can afford to pay data scientists, machine learning engineers, or AI specialists, because they can give one person a salary of $150k to replace hundreds of jobs at a time. As well-paid tech workers earn job stability and tremendous pay, they are often writing proprietary code for a company that will quite possibly outlive them. The workers' next of kin have no stake in what was built, only the money taken home. In other words, although the current family nucleus derives benefit from the parent's hard work, once that job ceases to exist, the same security is not guaranteed for future generations, because the whole point is to automate.

If most jobs get replaced by machinery and programs that can do a human's job, who, in two hundred years, owns that machinery and those programs? If they become monopolized by an elite few, and there are few jobs left for the rest of us, what happens then? Will we redistribute the wealth? Will life become luxurious for all? Will the whole global society benefit from thousands of years of human tool evolution, hard work from generation after generation, and technological progression? Or will we, the majority, be cut out, cast into a poverty unknown to us now?

But a question remains: if there is no economically viable consumer, what is the use of automation for the elite? Would the elite squeeze the economically desperate to harvest their last pennies, or would they forget the poverty-stricken and use other elites as consumers, similar to the relationship between wealthy and impoverished nations today?

To me, this is the dark skeleton in the closet of data science and AI. We have no plan for the future other than to automate it. We have agreed to no social contract, nor have we written a constitution for this progression of technology. We have patents that expire, but trade secrets stay hidden inside companies that may not wish to push their breakthroughs to patent. We build machinery that is the property of the company, not of the individuals who built it. We design programs that learn and therefore replace the programmer. We even write programs that are starting to replicate art without the artist. Who will own these programs a century from now?

Lack of Ethical Decision Making:

The second issue is that data scientists write code that makes automated statistical decisions with vast impacts on the population, without clarity or ethics encoded. It is essential to be wary of both the validity and the ethics of automation. Remember that the code is written by humans and the data is collected by humans, and both can be biased and lie.

If we collected data about recidivism, we might easily find a trend that black Americans are more likely to repeat a crime, but how was our data collected, and what data did we leave out? Did we notice that there was a bias toward jailing black Americans? Did we notice a historical trend of impoverishing black Americans, which may well have shaped this data? Did we write our recidivism questionnaires in a biased manner? The answers to these questions are essential to building an ethical and just model of recidivism, yet they are overlooked in the decisions currently going into such models.
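The mechanics of this are easy to demonstrate. Below is a toy simulation (entirely hypothetical numbers, not real criminal-justice data): two groups reoffend at exactly the same underlying rate, but one group is observed twice as closely, so twice as many of its reoffenses end up in the dataset. Any model trained on those records will "learn" a difference in behavior that does not exist.

```python
import random

random.seed(0)

# Hypothetical setup: both groups have the SAME true reoffense rate,
# but group B is policed twice as heavily, so its reoffenses are
# recorded twice as often as group A's.
TRUE_RATE = 0.3                       # identical underlying behavior
RECORD_PROB = {"A": 0.5, "B": 1.0}    # biased observation process

def recorded_rate(group, n=10_000):
    """Fraction of people with a *recorded* reoffense in this group."""
    recorded = 0
    for _ in range(n):
        reoffended = random.random() < TRUE_RATE
        observed = reoffended and random.random() < RECORD_PROB[group]
        recorded += observed
    return recorded / n

rate_a = recorded_rate("A")
rate_b = recorded_rate("B")

# The dataset now shows group B as roughly twice as "risky",
# even though behavior was identical by construction.
print(f"recorded rate A: {rate_a:.2f}, recorded rate B: {rate_b:.2f}")
```

The bias here lives entirely in the collection process, before any model is trained, which is exactly why questions about how the data was gathered matter as much as the model itself.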

Companies like Facebook, Google, and Twitter are using machine learning models to get you hooked without thinking of the societal implications. Since the only goals for a behemoth company are retention, growth, and revenue, ethical implications do not figure in their business models. We see this over and over again, with examples like Uber's surge pricing during a terrorist attack in Sydney, or Facebook taking advantage of at-risk teens. But those are the obviously ethically corrupt algorithms. What about the ethics that are more subtle?

Data scientists might find themselves in jobs predicting insurance rates, screening hiring candidates, directing advertisements toward the right population, and a host of other ethically undetermined fields. It might be easy to forget ethics, especially since many of us have had no training in it, but it is essential to ponder whether your model and your collected data are going to impact a population.

Those of us in the field must ask ourselves what should be done about this, and what can be done. Of course, you want to provide for your family, to take your cut of well-deserved wealth after working so hard to specialize in our society, but at what cost? Of course, you want to optimize your models for the highest revenue for your company, but what negative impact might you have just created for a subpopulation? I certainly don't have the answers, and you might not either, but you do have an obligation, as an acting global citizen, to think critically about these future outcomes.

So think hard, think ethically, challenge yourself, and challenge your colleagues to do the same. With that in mind, perhaps we can tilt tomorrow towards a brighter future for those to come.