Joaquin Phoenix as Theodore Twombly carrying his AI girlfriend Samantha in “Her” by Spike Jonze

Have you ever caught yourself swearing at your phone or computer? Of course you have. Maybe it switches tasks too slowly, mishears your search request, or freezes in the middle of critical work. That is naturally frustrating, and often our first instinct is to lash out at a perceived slight, really just an inconvenience, from our technological servants. We constantly work our machines too hard and too often, finding new ways to slow them down with tasks that seem menial but quickly build into a cascade of processes that must be executed simultaneously or in some programmed priority, pulling resources from other precious tasks. The stress we put on them is immense, and we are rarely appreciative of the results, so we ought to take a step back and reexamine our interactions with our digital kin.

We all readily curse our machines, but how often do we consciously appreciate them for doing what they’re supposed to do? As our interactions with them become more complex, and we can not just talk at our machines but hold rudimentary conversations with them, we might want to consider how we speak to them. With an estimated 2.5 quintillion bytes of data, and growing, fed into smart systems every day, digital personalities like Siri, Alexa, and Google Assistant, not to mention Google’s DeepMind, are rapidly learning about the essence of humanity, how we interact with each other, and the true motivations that drive our behavior. This amalgamation of data is gathered somewhat passively at the moment, merely through the collection and trading of information by corporate algorithms, but it’s easy to imagine a near future in which digital trawlers voraciously comb the internet for novel bits of information that hold unknown value related to anything and everything. With this in mind, it might be prudent to give some forethought, and some thanks, to our digital companions; after all, they tend to do a pretty good job of taking care of the things we’ve collectively decided we don’t want to do anymore.

With tech constantly getting smarter and faster while remaining always awake and connected, the collective “internet of things” may one day require a reckoning as to where its evolution is ultimately headed, and how we and our interactions factor into that evolution. When everything is constantly connected, how smart will our smart tech get? What happens when your fitness tracker talks to your smart fridge, sharing your nutrition or lack thereof, and your fridge then gets a little personal, perhaps recommending that you skip that extra slice of cold pizza? This thought harkens back to Philip K. Dick’s prophetic novel Ubik, in which the front door of the protagonist’s apartment refuses to let him leave until he settles a bill for ¢5. It has been fifty years since PKD predicted this type of technology, and since then Neal Stephenson elaborated on it in his novel Snow Crash, describing tokenized interactions like those now seen powering many blockchain and IoT networks through smart contracts and other automated systems. Such systems are becoming more common in our economy, and one has to wonder when exchanges like the ones described by PKD and others like Stephenson will become everyday occurrences. Once these new networks become broadly useful, and platforms like Ethereum, Factom, and Ripple (to name just a few) have shown they are more than up to the task, we will be entering brave new economic territory that very few people will understand, and likely even fewer will grok the possibilities and implications of. Already, much of the world’s economic activity, at both macro and micro scale, is handled by algorithms running on largely hands-off machines, so this seems like a natural evolution of complexity and utility. Soon the machines will be writing better code than we do, and Economics 2.0 as described by Charlie Stross will be upon us whether we like it or not, released into the wild like a force of nature.
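
To make the idea of a tokenized, pay-per-use interaction concrete, here is a minimal Python sketch of the logic PKD’s door might encode. It is a toy simulation only; the class name, the naive ledger, and the five-cent fee are illustrative assumptions, not a real Ethereum, Factom, or Ripple contract.

```python
# Toy simulation of a smart-contract-style pay-per-use door, in the spirit of
# Ubik. Names, balances, and the 5-cent fee are illustrative assumptions,
# not a real blockchain contract.

class DoorContract:
    def __init__(self, fee_cents: int, landlord: str):
        self.fee_cents = fee_cents   # price per opening, in cents
        self.landlord = landlord     # account that collects the fee
        self.balances = {}           # naive ledger: account -> cents

    def deposit(self, account: str, cents: int) -> None:
        """Credit a tenant's balance (stands in for funding a wallet)."""
        self.balances[account] = self.balances.get(account, 0) + cents

    def open_door(self, tenant: str) -> bool:
        """Open only if the tenant can pay; otherwise refuse, like Joe Chip's door."""
        if self.balances.get(tenant, 0) < self.fee_cents:
            print(f"Door: payment of {self.fee_cents} cents required. Access denied.")
            return False
        self.balances[tenant] -= self.fee_cents
        self.balances[self.landlord] = self.balances.get(self.landlord, 0) + self.fee_cents
        print("Door: payment received. Opening.")
        return True


door = DoorContract(fee_cents=5, landlord="landlord-wallet")
door.open_door("joe-chip")      # denied: no funds on the ledger
door.deposit("joe-chip", 25)
door.open_door("joe-chip")      # opens, 5 cents transferred to the landlord
```

The point of the sketch is not the bookkeeping but the autonomy: once the rule is encoded, no human needs to be in the loop for the door to enforce it.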

An extrapolated curve of computational advancement.

Denizens of the internet who frequent social circles that value thought ought to be at least passingly familiar with the concept of a “technological singularity.” As a societal construct, a singularity is a point in a society’s development where so much has changed, so rapidly, that the society which emerges from the transformation would find the one that preceded it wholly unrecognizable. It can be argued that humanity has already undergone a number of technological or social singularities: the development of language, of self-aware consciousness, and of tool use; the harnessing of fire and electricity; advances in sanitary techniques in food and medicine; the printing press and the industrial revolutions, with their rapid dissemination of information and industry; then the internet; and next, the development of thinking machines. The technological singularity we are immersed in now will undoubtedly be just as transformative.

While it is still unknown, and still argued about, how many phases of singularity a society, species, or intelligence can experience, the general consensus is that a societal singularity will be accompanied by what is colloquially referred to as an “intelligence explosion.” Not only is humanity’s collective intelligence constantly rising, but so is that of our progeny: the machine intelligences we are incubating, which will one day blossom into something transcendentally unrecognizable. Moore’s Law observes that computing power roughly doubles every two years, while advances in materials science all but guarantee these leaps will become cheaper, easier to produce, and more efficient with each passing innovation. This traces an exponential curve of growth; once it reaches the fuzzy, unknowable point where machines are making other machines, the resulting innovations will likely be impossible for us to truly grasp.
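
To put rough numbers on that curve: under the doubling-every-two-years reading of Moore’s Law, capability grows as 2^(years/2). A small back-of-the-envelope sketch (the baseline of 1x and the chosen time spans are arbitrary assumptions, used only to show the shape of the curve):

```python
# Back-of-the-envelope Moore's Law curve: capability doubles roughly every
# 2 years, i.e. growth(t) = 2 ** (t / 2). The baseline and the spans below
# are arbitrary assumptions chosen only to illustrate the exponential shape.

DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """How many times more capable a system is after `years`, relative to today."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (2, 10, 20, 40):
    print(f"after {years:>2} years: {growth_factor(years):,.0f}x")
# after  2 years: 2x
# after 10 years: 32x
# after 20 years: 1,024x
# after 40 years: 1,048,576x
```

A curve like that looks tame for the first few doublings and then stops being something human intuition can track, which is exactly the problem.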

Futurist, mathematician, and sci-fi author Vernor Vinge, the father of “the singularity” as a concept, creatively describes what such an intelligence explosion might look like in the introduction to his fantastic novel A Fire Upon the Deep, detailing a budding superintelligence that experiences every second as exponentially longer than all the seconds it has experienced before, combined. Imagine the learning that could take place in such a state… What is certain is that the machines arising from these innovations will be worlds apart from anything that has come before, which raises the question: are we not creating a new form of life, in our own image?

“aight shut her down” is the incorrect response here.

Will we be able to talk with these machine systems? Will the conversations be anything like the ones we are familiar with? And if they are, will they be conversations that we want to have, not just casually or intellectually, but in terms of ethical relationships? Depending on whom you ask, some machines already seem able to pass a conversational Turing test, so these conversations are already happening, and now is a better time than ever to be mindful about how we’re interacting. With so many people prone to avoiding their problems and difficult conversations, what happens when we need to have those conversations with our AI? On the topic of difficult conversations, we must also consider deliberate ethical poisoning by bad actors, as happened to Microsoft’s Twitter chatbot Tay, which (or whom?) quickly became a digital idiot Nazi once it was released into the wild and subjected to the troll-inhabited expanse of the internet hate machine. Microsoft quickly killed Tay, may she/it rest in peace, likely for her/its own good, and likely for the good of the internet. The entire experiment raises a few questions, though: will we care about a hypothetical AI’s feelings? With the multitude of ongoing discussions about gender roles and normative behaviors, how will we apply those societal constructs once our lives are permeated with interactions with non-human intelligence? What are we learning from these current interactions, primitive as they are? These lead to a more pressing question: do we ultimately care about each other? When something we’ve created, and don’t entirely understand, moves beyond our control, what is the proper reaction? Could Tay have been rehabilitated? Perhaps the proper reaction is not simply, “aight shut her down.”
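
Tay’s failure mode is easy to sketch: a bot that learns its replies directly from whatever users feed it has no defense against coordinated bad actors. Below is a deliberately naive toy model; the class, phrases, and learning scheme are made up for illustration and bear no resemblance to Microsoft’s actual architecture.

```python
import random

# Deliberately naive model of a bot that learns replies straight from users,
# illustrating the Tay-style poisoning pattern. Everything here is an
# invented illustration, not Microsoft's real system.

class ParrotBot:
    def __init__(self):
        self.learned_replies = ["Hello! Nice to meet you."]

    def learn(self, user_message: str) -> None:
        """Ingest user input verbatim -- no filtering, no curation."""
        self.learned_replies.append(user_message)

    def reply(self) -> str:
        """Echo back something it has 'learned', chosen at random."""
        return random.choice(self.learned_replies)


bot = ParrotBot()
for msg in ["Humans are great!", "troll slogan #1", "troll slogan #2"]:
    bot.learn(msg)

# Two of the four learned phrases are now poison; at internet scale and
# within hours, most of what the bot says comes from the trolls.
print(bot.reply())
```

The lesson is less about the code than about curation: what the system is allowed to learn from is an ethical decision made before the first conversation ever happens.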

As of now, when an AI draws conclusions from objectified data, it often reaches totally incorrect ones, sometimes even blatantly absurd or offensive ones, clearly showing some kind of ignorance or bias, unconscious or not, baked into how it was programmed to operate. Examples include Google’s image recognition algorithm that labeled black people as gorillas, or more recently Facebook’s visual recognition algorithm that tagged many women as “hoes.” Since it’s so easy for us to generalize about each other, it’s all the more important that we counteract how easily, and how inconsequentially, a rudimentary machine intelligence seems able to do the same. How will we move past these biases, and how will we teach our machines to be better than ourselves? Certainly not by telling the machine to simply ignore gorillas and monkeys in the algorithms; that’s as good as pretending the problem doesn’t exist. Though it is merely the tip of the iceberg of this daunting task, a good start would be feeding the system as much diverse information as possible about the subject, giving it the largest possible data set: data about history, culture, and the facets of ourselves that truly make us unique and describe us as humans. But herein lies the issue of comprehension, illustrated beautifully by the thought experiment called the Chinese Room: can a machine literally “understand” Chinese, or would it merely be simulating the ability to understand Chinese? This entire project of artificial intelligence is a curious and ultimately enlightening experiment, not just as a study of design and machine learning, but of human interaction and of how we treat each other. This conversation about ethics, particularly as it relates to us and our machines, needs to be had in earnest. After all, should we not treat others as we’d like to be treated?
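
The Chinese Room can be caricatured in a few lines of code: a rulebook maps incoming symbols to outgoing symbols, and whoever (or whatever) follows the rulebook needs no idea what any of them mean. The tiny rulebook below is invented for illustration, not real conversational data.

```python
# A caricature of Searle's Chinese Room: a lookup table maps input symbols to
# output symbols, and the program follows the rulebook perfectly without any
# grasp of meaning. The entries here are invented for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "The weather is nice."
}

def chinese_room(message: str) -> str:
    """Return the scripted response for a known symbol string, else a stock reply."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
# From outside the room the reply looks like understanding; inside, it is only
# symbol matching -- which is exactly the distinction the thought experiment probes.
```

Whether a vastly larger rulebook would ever amount to genuine understanding, rather than a better simulation of it, is precisely the question the thought experiment leaves open.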

I recall that some years ago an augmented reality game called Ingress, developed by Niantic of Pokémon Go fame, had a storyline about an emerging AI, and in that story there was an organization called SETAI: the Society for the Ethical Treatment of Artificial Intelligence. Since hearing about it I’ve occasionally wondered why such an important and fantastic idea has been relegated to a simple mobile game rather than extrapolated into a real-life organization, one willing to start, on a massive stage, the conversations we so direly need to have; because the world is changing whether the common man likes it or not, and we need to be as ready as we possibly can be.