Stanley Kubrick’s film 2001: A Space Odyssey opens with one of the most iconic scenes in film history. The scene, entitled THE DAWN OF MAN, portrays a small band of early human ancestors, ape-like in appearance, living amidst a harsh, arid landscape. As the scene unfolds, the audience is given a brief glimpse into the lives of these early hominids as they struggle to survive the barbarous conditions of their surroundings. They cower in fear at night, spend most of the day foraging the scarce vegetation for food, and one poor fellow is even torn to shreds by a leopard. The creatures appear helpless and afraid, the quality of their lives completely determined by the boundaries and limitations of their external environment.

That is, until the arrival of the monolith, a mysterious black object that appears out of nowhere. No explanation is given as to what it is or where it came from, but shortly after its arrival, the apes suddenly come to the realisation that the dry, hard bones scattered across the desert floor can be used as tools. This simple innovation completely changes the fate of the group, who find they can now defend themselves against predators, take down easy prey for meat, and eventually discover that these newfound tools can even be used as weapons against each other. The scene ends with one ape triumphantly hurling a bone high into the air, which is immediately followed by the next scene – a shot of a giant space station orbiting the Earth. The contrast is powerful, and provides a stark reminder of just how far our species has come.

The famous opening scene in 2001 depicts the era of human history referred to as the Paleolithic, an era characterised by the development of the first and most primitive Stone Age tools some 2.5 million years ago. In many ways, these tools can be thought of as mankind’s first technologies, our first successful attempt at manipulating the natural environment around us to not only meet our survival needs, but to also add the elements of pleasure and ease to our existence. It’s hard to imagine how utterly incomprehensible the broader significance of these tools would have been to the primitive minds of our early ancestors. How could they have possibly known that their simple creations would ignite a chain reaction of technologies that would eventually give rise to the mechanical and digitised world of the 21st century? And following that, if such simple tools gave rise to the world we currently inhabit, what unfathomable worlds will emerge from the foundations of our own technological creations?

In their latest book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, MIT economists Erik Brynjolfsson and Andrew McAfee lay out a strong case that humanity is on the brink of its next technological leap, and that we are now in the early stages of a monumental shift as transformative as that brought on by the Industrial Revolution in the late 18th century. With the rise of self-driving cars, service robots, medical diagnostic machines, and advances in Artificial Intelligence (A.I.), Brynjolfsson and McAfee argue that computers and digital advances are doing for mental power what the steam engine and its descendants did for muscle power.

If this is true, and the key building blocks are already in place for digital technologies to usher us into a new technological age, then it appears that Kubrick’s mysterious monolith is at our doorstep once again.

Of course, as Brynjolfsson and McAfee also point out in their book, the shift into the industrial revolution was accompanied by a host of challenges and dark consequences, from soot-filled skies to the horrific exploitation of child labour. The transition into a second machine age will likely be accompanied by its own host of challenges and consequences, and the likelihood of catastrophe will continue to increase unless we begin to talk about and prepare for them now.

The purpose of this essay then is to do just that, to begin a conversation that needs to be had on a much broader scale, and to address some of the foreseeable challenges that may arise in the near future. How will machines and the digitisation of our world change the way we live our lives? Will these machines take many of our jobs, and if so, what will work look like in such a world? It seems the next decade will reveal not only some of the greatest economic and social challenges mankind has ever faced, but also, provided we navigate those muddy waters unscathed, one of our deepest existential challenges.

What will man do, if technology solves all his economic problems, and he is deprived of his traditional purpose?

We’ll get to that.

First, let’s begin with the culprit.

PART I: Technology

Our Cynical Past

In the 1973 revision of his 1962 essay Hazards of Prophecy: The Failure of Imagination, the science fiction author Arthur C. Clarke penned these famous words:

“Any sufficiently advanced technology is indistinguishable from magic.” [1]

To our Paleolithic ancestors, the technological age we currently inhabit, with its flying machines and smartphones, might as well be one of sorcery and magic. In fact, given their lack of linguistic capabilities, even attempting to understand our world through the semantic categories of ‘sorcery’ and ‘magic’ would have been impossible. To their minds, our world is literally incomprehensible.

However, we need not go back 2.5 million years to blow ancestral minds with the wonders of our current technological landscape. The leaps and bounds we have made in the last 100 years will do just fine. The American chemist James B. Conant (1893–1978) put it like this:

“Somewhere around 1900, science took a totally unexpected turn. There had previously been several revolutionary theories and more than one epoch-making discovery in the history of science, but what occurred between 1900 and, say, 1930 was something different.”

To give an idea of what Conant is talking about, here is a brief list of some of the most influential discoveries made between 1903 and 1932:

1903: Wright Brothers: aeroplane; Charles Taylor: aeroplane engine

1905: Albert Einstein: theory of special relativity

1906: Walther Nernst: third law of thermodynamics

1909: Robert Andrews Millikan: determines the charge on an electron

1911: Heike Kamerlingh Onnes: superconductivity

1912: Max von Laue: X-ray diffraction

1913: Niels Bohr: model of the atom

1915: Albert Einstein: theory of general relativity (field equations also derived by David Hilbert)

1915: Karl Schwarzschild: discovery of the Schwarzschild radius – identification of black holes

1924: Edwin Hubble: the discovery that the Milky Way is just one of many galaxies

1925: Erwin Schrödinger: Schrödinger equation (quantum mechanics)

1927: Georges Lemaître: theory of the Big Bang

1929: Edwin Hubble: Hubble’s law of the expanding universe

1932: James Chadwick: discovery of the neutron

From these discoveries came inventions such as the radio, x-ray machines, refrigeration, television, antibiotics, rocketry, the automobile, the aeroplane and nuclear power. The modern world as we know it would not exist without these technologies, and yet prior to the 20th century, virtually none of them existed in any practical form.

One of the most interesting things to note about the rapid scientific and technological progress of the early 20th century is that even some of the most brilliant scientific minds of the day struggled to keep up, which often led to some startlingly bad predictions. Take, for example, the words of Lee de Forest, the American radio pioneer, writing about the future of television in 1926: “While theoretically and technically television may be feasible, commercially and financially I consider it an impossibility, a development of which we need waste little time dreaming.” [2]

Perhaps some of the most well-known forms of cynicism occurred in the fields of aeronautics and astronautics. At the beginning of the 20th century, scientists were almost unanimous in declaring that heavier-than-air flight was impossible, and that anyone who attempted to build aeroplanes was a fool. The American astronomer Simon Newcomb wrote a well-received essay that concluded:

“The demonstration that no possible combination of known substances can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.” [3]

This thread of technological cynicism can be seen strung throughout the following decades as well. Through the 1920s, ’30s and ’40s, eminent scientists mocked the pioneers of astronautics who were trying to figure out how to send objects to the moon. Here are the words of Professor A.W. Bickerton, writing in 1926:

“This foolish idea of shooting at the moon is an example of the absurd length to which vicious specialisation will carry scientists working in thought-tight compartments… The proposition appears to be basically impossible.”

The key lesson to learn from our cynical past is that an individual can be highly intelligent and competent in any given field, and still be completely wrong about the future – especially when they start with the preconception that what is being investigated is impossible.

Anyone who thinks the new technologies of the future are impossible, or a concern only for later generations, need only look to our past for their answer, and realise the truth of the old adage: never say never.

The Exponential Explosion

Glancing over the thin slice of time from 1900 onwards, it becomes quite clear that most of what we understand about the natural world stems from the last 100 years of human enquiry. This is a remarkable fact, given that 100 years on a cosmic chart of human existence is barely a blink of an eye. What is going on here? How have we managed to squeeze so much technological and scientific progress into such a short amount of cosmic time?
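To put that blink of an eye into numbers, here is a quick back-of-the-envelope sketch using the 2.5-million-year figure for the first Stone Age tools mentioned earlier: if the whole span of human tool use were compressed into a single 24-hour day, the last 100 years of science would occupy only the final few seconds.

```python
# Compress 2.5 million years of human tool use into a single 24-hour day,
# and ask how much of that day the last 100 years would occupy.
tool_use_years = 2_500_000
seconds_in_day = 24 * 60 * 60  # 86,400 seconds

last_century_seconds = 100 / tool_use_years * seconds_in_day
print(f"{last_century_seconds:.1f} seconds")  # 3.5 seconds
```

In other words, nearly everything we understand about the natural world was learned in the last three and a half seconds of the day.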

Gordon Moore, cofounder of Intel, is best known for a prediction he made in a 1965 article titled “Cramming More Components onto Integrated Circuits.” Moore’s most famous excerpt from this essay reads:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain constant for at least ten years.”

This is the original statement of Moore’s Law, the observation famous for giving us insight into the wonders of exponential growth: the number of components on an integrated circuit – and with it, computing power – doubles approximately every two years.

Moore’s law has held up astonishingly well over the last four decades, and has held for areas outside of integrated circuitry as well. Understanding the exponential growth of technology can also help us understand why prominent scientists of the past, living well before Moore, failed to appreciate just how fast technology can grow.

To give a modern example of exponential growth at work, consider the fact that today the smartphone in your pocket has more computing power than all of NASA back in 1969, when it placed two astronauts on the moon.

Moore’s observation continues to hold true. Our technology is getting faster and cheaper, and is showing no signs of slowing down.
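To see what a two-year doubling period actually implies, consider a minimal sketch. The doubling rule is Moore’s observation above; the 50-year horizon is simply an illustration, not a claim about any particular chip:

```python
# A minimal sketch of Moore's-law-style growth: relative computing power
# after a given number of years, doubling once every two years.
def relative_power(years, doubling_period=2):
    return 2 ** (years / doubling_period)

# 50 years at one doubling per two years = 25 doublings.
print(round(relative_power(50)))  # 33554432 – a roughly 33-million-fold increase
```

A 33-million-fold increase over a single working lifetime is exactly the kind of growth that linear intuition fails to anticipate, and it is why the confident predictions of the early 20th century aged so badly.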

It is only by applying the lessons we have learned from our cynical past, and by understanding the power of exponential growth, that we can hope to prepare ourselves for the future.

With these in mind, let us now cast our gaze forward, to some of the technologies that will lay the foundation for the second machine age.

What’s Around The Corner?

Self-driving Cars

In 2010, Google announced that the self-driving cars it had begun developing back in 2005 had been driving successfully, in traffic, on American roads and highways for the past 5 years [4]. By 2012, Google had grown a few prototypes into a small fleet of vehicles that had collectively logged hundreds of thousands of miles with no human involvement and with only two accidents. Of those two accidents, one occurred when a person was driving the autonomous car, and the other happened when a Google car was rear-ended – by a human driver.

Self-driving cars went from being the stuff of science fiction to a reality in a few short years. Cutting-edge research explaining why they were not coming anytime soon was outpaced, yet again, by rapid, unforeseen developments in both science and engineering.

How soon can we expect these self-driving cars to be on our roads? In an interview with Forbes magazine in May 2015, Mark Fields, the CEO of Ford, estimated that fully autonomous vehicles might be available on the market within roughly 5 years. [5]

Communication

It was often thought that complex communication tasks, like listening and talking to customers, were solely reserved for human beings. A 2004 review of the previous half-century’s research in automatic speech recognition opened with the admission that “Human-level speech recognition has proved to be an elusive goal.” [6]

Less than a decade later, major elements of that goal have been reached. Apple and other companies have made robust natural language processing technology available to hundreds of millions of people via their mobile phones. “Siri” has been assisting Apple users for a few years now, and the technology continues to grow. [7]

In a TIME article titled “Meet the Robot Telemarketer Who Denies She’s A Robot”, journalist Zeke Miller describes an eerie experience he had with a robot telemarketer selling health insurance. When he asked if he was talking to a robot (which he clearly was), the robot laughed and replied, “No, I am a real person. Maybe we have a bad connection?” The rest of the conversation is slightly stilted, and the telemarketer robot would clearly fail any kind of stringent Turing test, but the progress of the technology over the past few years is nothing short of incredible. [8]

Cleaning Services

Cleaning robots, such as the Neato XV Signature Pro, are now completely self-sufficient. In addition to building internal maps of their surroundings, they know when to return to their charging ports, and can carry out pre-set cleaning schedules of their own accord. Other robots, such as the Dyson 360 Eye, can be controlled remotely via a phone app.

Retail

Many will have also noticed the gradual takeover by self-checkout machines in many grocery stores, such as Woolworths and Coles, and even in discount retailers like Kmart and Big W. Michael Chui, a partner at the McKinsey Global Institute, describes where this trend might take us next:

“Through a population of sensor technologies placed strategically within stores, retailers will recognize customers when they walk in the door through smart devices or other means. Stores will have payment cards on file; customers will be billed when they leave the store with the merchandise, essentially bypassing the checkout.” [9]

Virtual Reality

The emergence of Virtual Reality, a very sci-fi concept if ever there was one, has exploded. The Rift, a virtual reality head-mounted display developed by Oculus VR, is set to be available on commercial markets sometime in early 2016. By immersing its wearer in a fully three-dimensional world, the Rift takes gaming to a completely different level. Anyone who has had the privilege of trying one of the early prototypes will likely tell you it is one of the most incredible and surreal experiences a human being can have. The implications of this technology stretch well beyond the realm of video games, of course, and it has shown potential in a number of different fields, including psychology and property development, among others. [10]

Artificial Intelligence

All of these are incredible advances in technology, and they will no doubt shape our world in ways we still can’t imagine. But what’s even more incredible is that, in terms of life-changing technologies, these are all merely warm-up acts compared to what will take centre stage in the years to come. Arguably the most important is the emergence of A.I.

To take just one example: IBM is applying the same innovations that allowed its supercomputer ‘Watson’ to defeat the best human players on the game show Jeopardy! in 2011 [11] to help doctors better diagnose what’s wrong with their patients. The supercomputer is being trained to sit on top of all the world’s high-quality published medical information; match it against a patient’s symptoms, medical history, and test results; and formulate both a diagnosis and a treatment plan. The huge amounts of information involved in modern medicine make this type of advance critically important. IBM estimates that it would take a human doctor 160 hours of reading each and every week just to keep up with the relevant new literature.

The Bigger Picture

Due to exponential growth, technology continues to spread both wider and faster than ever before, and trying to keep up with it all can be overwhelming to say the least. In order to put all of this progress into some kind of context, we need to take a breather and ask ourselves a fundamental question – what is the ultimate point of technology?

Our ancestors used primitive technologies to gain access to new food sources, increase shelter building efficiency, and to defend themselves against predators. Technology gave our ancestors the ability to go beyond the limitations and boundaries of their external environment and enter into new uncharted territories never before imagined. No longer were they merely pawns in the chess game of nature, they became players, with the ability to make more defining choices and exert ever greater powers of influence on the world around them.

In this sense, the point of technology has been, and continues to be, liberation – the unshackling of human beings from the servitude of nature, so that they may continue to reach beyond their limits.

The same spirit of technology, the mysterious monolith if you will, remains with us today, driving us ever forward into worlds of unfathomable magic (as Clarke defined it). This is why we are building cars that drive for us, robots that clean for us, and machines that think for us – all so that the human species may continue to step into new doorways of unthinkable possibilities.

Of course, this is all in principle. It’s always important to remind ourselves that the same spirit of technology that brought us out of the savannah also gave us the power of nuclear weapons. Without the psychological, emotional, and global maturity to use our technology for good, the possibility that we will ultimately destroy ourselves is certainly on the table.

Despite the wonderful and dangerous possibilities of the future, the continual development of our technology does raise some confronting questions for us to deal with in the present.

If technology and automation continue to do most of the work for us, what will be left for us?

PART II: The Future of Work & The Impending Existential Crisis

A Jobless World

Writing in the 1930s, at the beginning of the Great Depression, the British economist John Maynard Keynes, despite the pervasive pessimism of his contemporaries, predicted a time in the future when man would face unprecedented prosperity and abundance. Showing further powers of foresight, he quickly identified a likely conundrum. He phrased the problem like this:

“Thus for the first time since his creation man will be faced with his real, his permanent problem – how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well… Yet there is no country and no people, I think, who can look forward to the age of leisure and of abundance without dread. For we have been trained too long to strive and not to enjoy.”

For some, the prospect of a jobless world sounds wonderful, a kind of utopian existence that encapsulates man’s final triumph over nature and our constant battle for subsistence. It taps into one of mankind’s oldest fantasies, to someday have all our material needs fulfilled minus all the drudgery. Others find this concept laughable, detestable even. What would we do with all the free time and who would pay for everything?

It was Voltaire who once said, “Work saves us from three great evils: boredom, vice and need,” and there is plenty of psychological research to support him on that point. However, work is a very loosely defined term, and is certainly not constrained to only those tasks for which a person receives monetary rewards. In a world in which our economic concerns have been taken care of by technology, work may still exist, but it would likely take on an entirely different meaning.

In many ways, our feelings about a jobless society are irrelevant. The rising tide of technology is bringing this scenario closer to our shore whether we want it here or not. Many of the technological advances coming in the next few years trespass overtly into territory currently occupied by human workers. Without a proper plan and an economic strategy in place to deal with these changes, things could get very ugly – fast.

Take, for example, the area of transportation. As it stands, the transport industry is one of the largest workforces we have in society. In Australia, the transport and postal industry employs around 600,000 Australians and accounts for 5.1% of national employment [12]. In the US, this number is around 3 million; extrapolated worldwide, the figure stands at about 70 million. What will happen to all the taxi drivers, bus drivers, couriers, and truck drivers when automobiles can drive themselves faster and more safely, for longer periods without rest, and require no incentives to work?

It’s not just the transport industry that is in trouble. The Committee for Economic Development of Australia (CEDA) is an independent organisation that focuses on what jobs and skills Australians will need to develop to ensure our economic growth in the future. In June 2015, they released a major research report concluding there is a high probability that 40 per cent of the Australian workforce, which equates to more than five million people, could be replaced by automation within the next 10 to 20 years. [13]

While new innovations and industries will likely absorb some of these workers in the short term, it seems unlikely that in the long run we will be able to create new jobs fast enough for everyone displaced by automation – especially when one takes into account the unfathomable speed of exponential growth, the digitisation of many of our goods and products, and the fact that fewer and fewer people are needed to run new businesses, not to mention the cost and time it takes to retrain an individual for a new career path.

So, what are our options? How can we best encourage technological growth while ensuring that as few people as possible are left behind?

The Short Term

Both CEDA and MIT economists McAfee and Brynjolfsson stress that in the short term, our focus should be on shaping economic reforms to incentivise innovation and ideation.

In our modern world, innovation isn’t merely a luxury, it is an imperative. In the modern economy of reduced transportation costs, global markets, and infinitely reproducible products, knowledge – in particular, newly created knowledge – is the singular route to prosperity. In the modern ‘knowledge economy’, those who don’t innovate are condemned to be commoditised.

With so much science-fiction technology becoming a reality, it might seem that radical steps are necessary. But this is not the case, at least not right away. While these new technologies bubble away behind the curtain of society, we still have human workers to take care of, and all the basic rules of growing a strong economy still apply.

Yes, new machine age technologies are quickly leaving the lab and entering mainstream business. But as rapid as this progress is, we still have lots of human cashiers, customer service representatives, lawyers, drivers, policemen, home health aides, managers, and other workers. Not all of these occupations are on the brink of being swallowed by the rising tide of automation.

Not yet anyway.

The Long Term

McAfee and Brynjolfsson also offer some long-term strategies, while taking the time to add some sensible caveats.

History is littered with unintended and sometimes tragic side effects of well intentioned social and economic policies. It’s difficult to know in advance exactly which changes will be most disruptive, which will be implemented with unexpected ease, and how people will react. No one solution is a panacea, and all come with their own host of challenges that need to be considered.

In saying that, one of the key long-term solutions is the possible revival of the Basic Income scheme. An unconditional basic income is a form of social security in which all citizens of a country receive a small unconditional sum of money (e.g., $1000/month), either from a government or some other public institution, in addition to any income received elsewhere. Basic income systems are financed through returns on publicly owned enterprises and through taxation.

For obvious economic reasons, policies like Basic Income have not been part of mainstream policy discussion. But the idea has a surprisingly long history, and came remarkably close to reality in 20th-century America. Although it sounds very much like a socialist left-wing policy, you might be surprised by the mixed bag of its proponents, ranging from Thomas Paine and Bertrand Russell to Richard Nixon and Martin Luther King Jr., who wrote in 1967, “I am now convinced that the simplest approach will prove to be the most effective – the solution to poverty is to abolish it directly by a now widely discussed measure: the guaranteed income.”

Of course, while it would act as a security blanket for the large number of workers displaced by automation, a Basic Income scheme would require an enormous redistribution of taxes and other economic strategies to make it work. A Basic Income also raises legitimate questions with regards to motivation and incentives. It might seem like a drastic measure, but if the situation gets desperate enough, we may have no choice.
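To get a rough sense of the scale of that redistribution, here is a back-of-the-envelope sketch using the $1000/month example above. The population figure is an assumption for illustration only (roughly the size of Australia’s adult population), and this is a gross cost that ignores offsetting savings from existing welfare programs:

```python
# Hypothetical gross cost of a basic income scheme.
eligible_adults = 19_000_000   # assumed figure, roughly Australia's adult population
monthly_payment = 1_000        # the $1000/month example used above

annual_cost = eligible_adults * monthly_payment * 12
print(f"${annual_cost / 1e9:.0f} billion per year")  # $228 billion per year
```

Even under these crude assumptions, the bill runs to hundreds of billions of dollars a year, which makes clear why any serious proposal hinges on how the taxation and public-enterprise financing is structured.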

There are countless other strategies proposed by economists and social reformers alike, many more than can be mentioned here, and it’s impossible to say at this stage which path we will take, as much depends on what emerges in the coming years.

As McAfee and Brynjolfsson point out in their book, in words worth echoing here: “We include them not necessarily to endorse them, but to instead spur further thinking about what kind of interventions will be effective as machines continue to race ahead.”

Further Down The Rabbit Hole – The Existential Crisis

In our current world, the social and existential value of a person is almost always determined by how much he or she can produce for a given society, whether that be in the form of goods and services or entertainment. If one is unable to produce anything of perceived value, their social status, along with their own sense of self-worth and purpose, appear to drop dramatically. [14]

Because of this, the drive to work in our society is seen as a matter of utmost importance. Most of us spend the vast majority of our waking life pursuing career paths that satisfy us and give us a good footing in our social environment. As a collective, we place enormous value on the virtues of productivity and hard work. Those that work hard and contribute are held up as ideal role models for the rest of society. Those that work little or not at all are often viewed as leeches, contributing nothing and deserving of our deepest contempt.

While this current societal value system remains with us today, it will likely come under serious threat as machines begin to occupy more and more of the current job market. This threat brings with it a number of existential challenges. If an individual has been conditioned for most of their life to view the worth of others and themselves solely through this lens of contribution and productivity, what will become of them when they come face to face with a future in which most of the producing has already been taken care of? How will they treat those who find, through no fault of their own, that their chosen career path has been swallowed by automation, and that they can no longer contribute in the traditional sense? How will they view themselves, given that for so long they have wrapped their own sense of self-worth and purpose so tightly around the concepts of work and their own capacity to contribute? To pour all of our existential value into the basket of work, as we have done for so long, seems dangerous, and can be likened to building a house on foundations of sand. When the winds of change come a-blowin’, that house is going to crumble into oblivion.

It may be that our greatest challenge in the distant future will not be the adaptation of new economic and social policies to meet our changing world, but the development of new value systems, ones that seek to understand human purpose and existential value in more universal and concrete terms, independent of the contributing potential of any of its citizens. This change in our value system will have enormous consequences and re-shape society in ways we can’t yet imagine. How will purpose be defined in a world absent of work? How will human beings occupy their time? Which ways of living will we value, and which will we condemn?

There are many questions and no absolute answers. However, Keynes does provide us with a possible glimpse of those who might lead the way if we do end up in an age of complete technological subsistence:

“Of course there will still be many people with intense, unsatisfied purposiveness who will blindly pursue wealth – unless they can find some plausible substitute. But the rest of us will no longer be under any obligation to applaud and encourage them… We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.”

It is possible that, once liberated from the toils of work by our technology, humans will go on to explore other facets of the human experience that have currently been pushed aside. Perhaps we will see a resurgence in spirituality, a new, more passionate search for deeper meaning and purpose in the cosmos. In addition to continuing our exploration of the outer worlds of space, perhaps we will also use our newfound leisure to begin a mass exploration of the inner worlds of the psyche, combining new and unforeseen technologies with methods from our deep history, including techniques such as meditation and the use of select hallucinogens such as ayahuasca and psilocybin mushrooms – substances that were once used by our ancestors for psychological healing and have been shown to produce extraordinary effects on human consciousness.

John Adams, a US diplomat and politician living through most of the 18th century, had a vision for humanity that culminated in the pursuit of more creative and artistic avenues, all of which he viewed as higher forms of learning and being. He wrote:

“I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture, navigation, commerce, and agriculture, in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and porcelain.”

Perhaps our creative endeavours will be the final iceberg upon which we rest our sense of purpose and existential hopes; a last remaining vessel floating across an enormous dark sea of long-forgotten pursuits and abandoned pastimes, all of which now lie buried under the weight of history.

Of course, what happens to that iceberg when machines themselves learn to create?

We are, of course, heading very far down the rabbit hole at this point, and there comes a stage where it simply becomes impossible to know what lies around the corner.

While we can’t know for certain what the future will hold, we can safely say the trajectory is set for a world of strange and wonderful possibilities.

Some Final Thoughts

Our technology at this point in time can be likened to a runaway train, hurtling humanity along a track at ever-increasing speeds. The track ahead is shrouded in mist, and no one truly knows whether we are heading for a technological age of wonder or complete disaster. Only one thing seems certain – change is inexorable; resistance is futile. Stopping technology at this point is neither feasible nor desirable.

We are not helpless however, and as always, much of the future will depend on the choices we make today.

And there is hope in this – for man, as much as he is a creature of intense habit and inertia, is also a creature of incredible adaptability. Our talent for moulding ourselves to new environments and our knack for problem solving is possibly the most defining characteristic of our species.

And it is precisely this trait that we will need in the coming decades.