A lone researcher recently made a remarkable discovery that may save millions of lives. She identified a chemical compound that effectively targets a key growth enzyme in Plasmodium vivax, a microscopic parasite responsible for a large share of the world's malaria cases. The scientist behind this new weapon against one of humanity's great biological foes didn't expect praise, a bonus check, or even so much as a hearty pat on the back for her efforts. In fact, "she" lacks the ability to expect anything.

This breakthrough came courtesy of Eve, a "robotic scientist" that resides at the University of Manchester's Automation Lab. Eve was designed to find new disease-fighting drugs faster and cheaper than her human peers. She achieves this by using advanced artificial intelligence to form original hypotheses about which compounds will murder malicious microbes (while sparing human patients) and then conducting controlled experiments on disease cultures via a pair of specialized robotic arms.

Eve is still under development, but her proven efficacy all but guarantees that Big Pharma will begin to "recruit" her and her automated ilk in place of comparatively slow human scientists who demand annoying things like "monetary compensation," "safe work environments," and "sleep."

If history is any guide, human pharmaceutical researchers won't disappear entirely—at least not right away. What will probably happen is that the occupation will follow the path of so many others (assembly line worker, highway toll taker, bank teller) in that the ratio of humans to non-sentient entities will tilt dramatically.

Machines outperforming humans is a tale as old as the Industrial Revolution. But as this process takes hold in the exponentially evolving Information Age, many are beginning to question if human workers will be necessary at all.

The Brand New Thing That Is Happening

The Luddites were an occasionally violent group of 19th-century English textile workers who raged against the industrial machines that were beginning to replace human workers. The Luddites' anxieties were certainly understandable, if—as history would eventually bear out—misguided. Rather than crippling the economy, the mechanization the Luddites feared actually improved the standard of living for most Brits. New positions that took advantage of these rising technologies and the cheaper wares they produced (eventually) supplanted the jobs that were lost.

Fast-forward to today and "Luddite" has become a derogatory term for anyone with an irrational fear or distrust of new technology. The so-called "Luddite fallacy" has become near-dogma among economists as a way to describe and dismiss the fear that new technologies will eat up all the jobs and leave nothing in their place. So, perhaps the HR assistant who's been displaced by state-of-the-art applicant tracking software or the cashier who got the boot in exchange for a self-checkout kiosk can take solace in the fact that the bomb that just blew up in their lives was merely clearing the way for a new, higher-skill job in their future. And why shouldn't that be the case? This technology-employment paradigm has been validated by the past 200 or so years of history.

Yet some economists have openly pondered if the Luddite fallacy might have an expiration date. The concept only holds true when workers are able to retrain for jobs in other parts of the economy that are still in need of human labor. So, in theory, there could very well come a time when technology becomes so pervasive and evolves so quickly that human workers are no longer able to adapt fast enough.

One of the earliest predictions of this personless workforce came courtesy of an English economist who famously observed, "We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labor outrunning the pace at which we can find new uses for labor."

That economist was John Maynard Keynes, and the excerpt was from his 1930 essay "Economic Possibilities for our Grandchildren." Well, here we are some 85 years later (and had Keynes had any grandchildren, they'd be well into retirement by now, if not moved on to that great job market in the sky), and the "disease" he spoke of never materialized. It might be tempting to say that Keynes's prediction was flat-out wrong, but there is reason to believe that he was just really early.

Fears of technological unemployment have ebbed and flowed through the decades, but recent trends are spurring renewed debate as to whether we may—in the not-crazy-distant future—be innovating ourselves toward unprecedented economic upheaval. This past September in New York City, there was even a World Summit on Technological Unemployment that featured economic heavies like Robert Reich (Secretary of Labor during the Clinton administration), Larry Summers (Secretary of the Treasury, also under Clinton), and Nobel Prize–winning economist Joseph Stiglitz.

So why might 2016 be so much more precarious than 1930? Today, particularly disruptive technologies like artificial intelligence, robotics, 3D printing, and nanotechnology are not only steadily advancing, but the data clearly shows that their rate of advancement is increasing (the most famous example being Moore's Law's near-flawless record of describing how computer processors grow exponentially brawnier with each generation). Furthermore, as each of these technologies develops, it will hasten the development of the others (for example, artificial intelligence might program 3D printers to create the next generation of robots, which in turn will build even better 3D printers). It's what futurist and inventor Ray Kurzweil has described as the Law of Accelerating Returns: Everything is getting faster—faster.
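
For a back-of-the-envelope sense of what that kind of exponential improvement means, here is a minimal Python sketch of Moore's-Law-style doubling. The starting transistor count (roughly that of the Intel 4004) and the two-year doubling period are illustrative assumptions, not industry data.

```python
# Toy illustration of Moore's-Law-style exponential growth.
# The starting count and doubling period are illustrative assumptions.

def projected_transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming it doubles every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 2_300  # roughly the Intel 4004's transistor count in 1971
    for elapsed in (0, 10, 20, 30, 40):
        print(f"After {elapsed:2d} years: ~{projected_transistors(start, elapsed):,.0f} transistors")
```

Run over 40 years, that simple doubling rule turns a few thousand transistors into a few billion, which is the intuition behind "everything is getting faster—faster."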

The evolution of recorded music illustrates this point. It's transformed dramatically over the past century, but the majority of that change has occurred in just the past two decades. Analog discs were the most important medium for more than 60 years before they were supplanted by CDs and cassettes in the 1980s, only to be taken over two decades later by MP3s, which are now rapidly being replaced by streaming audio. This is the type of acceleration that permeates modernity.

"I believe we're reaching an inflection point," explains software entrepreneur and author of the book Rise of the Robots, Martin Ford (read the full interview here)."Specifically in the way that machines—algorithms—are starting to pick up cognitive tasks. In a limited sense, they're starting to think like people. It's not like in agriculture, where machines were just displacing muscle power for mechanical activities. They're starting to encroach on that fundamental capability that sets us apart as a species—the ability to think. The second thing [that is different than the Industrial Revolution] is that information technology is so ubiquitous. It's going to invade the entire economy, every employment sector. So there isn't really a safe haven for workers. It's really going to impact across the board. I think it's going to make virtually every industry less labor-intensive. "

To what extent this fundamental shift will take place—and on what timescale—is still very much up for debate. Even if there isn't the mass economic cataclysm some fear, many of today's workers are completely unprepared for a world in which it's not only the steel-driving John Henrys who find that machines can do their job better (and for far cheaper), but the Michael Scotts and Don Drapers, too. A white-collar job and a college degree no longer offer any protection from automation.

If I Only Had a Brain

There is one technology in particular that stands out as a disruption super-tsunami in waiting. Machine learning is a subfield of AI that makes it possible for computers to perform complex tasks for which they weren't specifically programmed—indeed, for which they couldn't practically be programmed—by letting them learn patterns from data rather than follow hand-written rules.
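
To make the "not specifically programmed" idea concrete, here is a deliberately tiny, purely illustrative Python sketch. No rules about music are written anywhere; a nearest-neighbor learner picks up the pattern from labeled examples. All feature values and labels are invented for the example.

```python
# A tiny machine-learning sketch: the "knowledge" comes entirely from
# labeled examples, not hand-written rules. All data below is invented.

from math import dist

# Each song is (tempo in BPM, acoustic-ness from 0 to 1), labeled by past listener behavior.
training_songs = [
    ((170, 0.1), "skipped"),
    ((165, 0.2), "skipped"),
    ((80, 0.9), "liked"),
    ((95, 0.8), "liked"),
]

def predict(song_features):
    """Label a new song by copying the label of its nearest training example."""
    nearest = min(training_songs, key=lambda example: dist(example[0], song_features))
    return nearest[1]

print(predict((90, 0.85)))   # -> "liked"
print(predict((175, 0.15)))  # -> "skipped"
```

Swap in more data and a fancier model and you have the basic recipe behind the systems described below.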

Machine learning is how Pandora knows what songs you'll enjoy before you do. It's how Siri and other virtual assistants are able to adapt to the peculiarities of your voice commands. It even rules over global finances (high-frequency trading algorithms now account for more than three-quarters of all stock trades; one venture capital firm, Deep Knowledge Ventures, has gone so far as to appoint an algorithm to its board of directors).


Another notable example—and one that will itself displace thousands, if not millions, of human jobs—is the software used in self-driving cars. We may think of driving as a task involving a simple set of decisions (stop at a red light, make two lefts and a right to get to Bob's house, don't run over anybody), but the realities of the road demand that drivers make lots of decisions—far more than could ever be accounted for in a single program. It would be difficult to write code that could handle, say, the wordless negotiation between two drivers who simultaneously arrive at a four-way-stop intersection, let alone the proper reaction to a family of deer galloping into heavy traffic. But machines are able to observe human behavior and use that data to approximate a proper response to a novel situation.

"People tried just imputing all the rules of the road, but that doesn't work," explains Pedro Domingos, professor of computer science at the University of Washington and author of The Master Algorithm. "Most of what you need to know about driving are things that we take for granted, like looking at the curve in a road you've never seen before and turning the wheel accordingly. To us, this is just instinctive, but it's difficult to teach a computer to do that. But [one] can learn by observing how people drive. A self-driving car is just a robot controlled by a bunch of algorithms with the accumulated experience of all the cars it has observed driving before—and that's what makes up for a lack of common sense."

Mass adoption of self-driving cars is still many years away, but by all accounts they are quite capable at what they do right now (though Google's autonomous car apparently still has trouble telling a deer from a plastic bag blowing in the wind). That's truly amazing when you consider what computers could do only a decade ago. Given the prospect of accelerating evolution, we can only imagine what tasks they will be able to take on in another 10 years.

Is There a There There?

No one disagrees that technology will continue to achieve once-unthinkable feats, but the idea that mass technological unemployment is an inevitable result of these advancements remains controversial. Many economists maintain an unshakable faith in The Market and its ability to provide jobs regardless of what robots and other assorted futuristic machines happen to be zooming around. There is, however, one part of the economy where technology has, beyond the shadow of any doubt, pushed humanity aside: manufacturing.

Between 1975 and 2011, manufacturing output in the U.S. more than doubled (and that's despite NAFTA and the rise of globalization), while the number of (human) workers employed in manufacturing positions decreased by 31 percent. This dehumanizing of manufacturing isn't just a trend in America—or even rich Western nations—it's a global phenomenon. It found its way into China, too, where manufacturing output increased by 70 percent between 1996 and 2008 even as manufacturing employment declined by 25 percent over the same period.

There's a general consensus among economists that our species' decreasing relevance in manufacturing is directly attributable to technology's ability to make more stuff with fewer people. And what business wouldn't trade an expensive, lunch-break-addicted human workforce for a fleet of never-call-out-sick machines? (Answer: all the ones driven into extinction by the businesses that did.)

The $64 trillion question is whether this trend will be replicated in the services sector that more than two-thirds of U.S. employees now call their occupational home. And if it does, where will all those human workers move on to next?

"There's no doubt that automation is already having an effect on the labor market," says James Pethokoukis, a fellow with the libertarian-leaning American Enterprise Institute. "There's been a lot of growth at high-end jobs, but we've lost a lot of middle-skill jobs—the kind where you can create a step-by-step description of what those jobs are, like bank tellers or secretaries or front-office people."

It may be tempting to discount fears about technological unemployment when we see corporate profits routinely hitting record highs. Even the unemployment rate in the U.S. has fallen back to pre-economic-train-crash levels. But we should keep in mind that participation in the labor market remains mired at the lowest levels seen in four decades. There are numerous contributing factors here (not the least of which is the retiring baby boomers), but some of it is surely due to people so discouraged with their prospects in today's job market that they simply "peace out" altogether.

Another important plot development to consider is that even among those with jobs, the fruits of this increased productivity are not shared equally. Between 1973 and 2013, average U.S. worker productivity in all sectors increased an astounding 74.4 percent, while hourly compensation only increased 9.2 percent. It's hard not to conclude that human workers are simply less valuable than they once were.

So What Now, Humans?

Let's embark on a thought experiment and assume that technological unemployment is absolutely happening and its destructive effects are seeping into every employment nook and economic cranny. (To reiterate: This is far from a consensus viewpoint.) How should society prepare? Perhaps we can find a way forward by looking to our past.

Nearly two centuries ago, as the United States entered the Industrial Revolution, it also engaged in a parallel revolution in education known as the Common School Movement. In response to the economic upheavals of the day, society began to promote the radical concept that all children should have access to a basic education regardless of their family's wealth (or lack thereof). Perhaps most important, students in these new "common schools" were taught standardized skills and adherence to routine, which helped them go on to become capable factory workers.

"This time around we have the digital revolution, but we haven't had a parallel revolution in our education system," says economist and Education Evolution founder Lauren Paer. "There's a big rift between the modern economy and our education system. Students are being prepared for jobs in the wrong century. Adaptability will probably be the most valuable skill we can learn. We need to promote awareness of a landscape that is going to change quickly."

In addition to helping students learn to adapt—in other words, learn to learn—Paer encourages schools to place more emphasis on cultivating the soft skills in which "humans have a natural competitive advantage over machines," she says. "Things like asking questions, planning, creative problem solving, and empathy—those skills are very important for sales, it's very important for marketing, not to mention in areas that are already exploding, like eldercare."

One source of occupational hope lies in the fact that even as technology removes humanity from many positions, it can also help us retrain for new roles. Thanks to the Internet, there are certainly more ways to access information than ever before. Furthermore (if somewhat ironically), advancing technologies can open new opportunities by lowering the barrier to entry for positions that previously required years of training; people without medical degrees might be able to handle preliminary emergency room diagnoses with the aid of an AI-enabled device, for example.

So, perhaps we shouldn't view these bots and bytes as interlopers out to take our jobs, but rather as tools that can help us do our jobs better. In fact, we may not have much choice: barring a global Amish-style rejection of progress, increasingly capable and sci-fabulous technologies are going to come online. That's a given; the workers who learn to embrace them will fare best.

"There will be a lot of jobs that won't disappear, but they will change because of machine learning," says Domingos. "I think what everyone needs to do is look at how they can take advantage of these technologies. Here's an analogy: A human can't win a race against a horse, but if you ride a horse, you'll go a lot further. We all know that Deep Blue beat Kasparov and then computers became the best chess players in the world—but that's actually not correct. The current world champions are what we call 'centaurs,' that's a team of a human and a computer. A human and a computer actually complement each other very well. And, as it turns out, human-computer teams beat all solely human or solely computer competitors. I think this is a good example of what's going to happen in a lot of areas."

Technologies such as machine learning can indeed help humans—at least those with the technical know-how—excel. Take the example of Cory Albertson, a "professional" fantasy sports bettor who has earned millions from daily gaming sites using hand-crafted algorithms to gain an edge over human competitors whose strategies are often based on little more than what they gleaned from last night's SportsCenter. Also, consider the previously mentioned stock-trading algorithms that have enabled financial players to amass fortunes on the market. In these so-called "algo-trading" scenarios, the algorithms do all the heavy lifting and rapid trading, but carbon-based humans are still in the background implementing the investment strategies.

Of course, even with the most robust educational reform and distributed technical expertise, accelerating change will probably push a substantial portion of the workforce to the sidelines. There are only so many people who will be able to use coding magic to their benefit. And that type of disparity can only turn out badly.

One possible solution many economists have proposed is some form of universal basic income (UBI), aka just giving people money. As you might expect, this concept has the backing of many on the political left, but it's also had notable supporters on the right (libertarian economic rock star Friedrich Hayek famously endorsed the concept). Still, many in the U.S. are positively allergic to anything with even the faintest aroma of "socialism."

"It's really not socialism—quite the opposite," comments Ford, who supports the idea of a UBI at some point down the road to counter the inability of large swaths of society to earn a living the way they do today. "Socialism is about having the government take over the economy, owning the means of production, and—most importantly—allocating resources…. And that's actually the opposite of a guaranteed income. The idea is that you give people enough money to survive on and then they go out and participate in the market just as they would if they were getting that money from a job. It's actually a free market alternative to a safety net."

The exact shape of a Homo sapiens safety net depends on whom you ask. Paer endorses a guaranteed jobs program, possibly in conjunction with some form of UBI, while "the conservative version would be through something like a negative income tax," according to Pethokoukis. "If you're making $15 per hour and we as a society think you should be making $20 per hour, then we would close the gap. We would cut you a check for $5 per hour."
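
The arithmetic Pethokoukis describes can be written as a one-line top-up rule, sketched minimally in Python below. The $20 target wage comes from his example; everything else a real negative income tax would involve (gradual phase-outs, household size, hours worked) is deliberately ignored here.

```python
# Minimal sketch of the wage top-up Pethokoukis describes.
# Real negative-income-tax proposals phase out gradually and account for
# household circumstances; this toy version only closes the hourly gap.

def hourly_top_up(hourly_wage: float, target_wage: float = 20.0) -> float:
    """Return the per-hour subsidy needed to lift a wage to the target."""
    return max(target_wage - hourly_wage, 0.0)

print(hourly_top_up(15.0))  # -> 5.0, matching the $15-to-$20 example
print(hourly_top_up(22.0))  # -> 0.0, no subsidy above the target
```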

In addition to maintaining workers' livelihoods, the very nature of work might need to be re-evaluated. Alphabet CEO Larry Page has suggested the implementation of a four-day workweek in order to allow more people to find employment. This type of shift isn't so pie-in-the-sky when you consider that, in the late 19th century, the average American worker logged nearly 75 hours per week, but the workweek evolved in response to new political, economic, and technological forces. There's no real reason that another shift of this magnitude couldn't (or wouldn't) happen again.

If policies like these seem completely unattainable in America's current gridlock-choked political atmosphere, that's because they most certainly are. If mass technological unemployment does begin to manifest itself as some anticipate, however, it will bring about a radical new economic reality that would demand a radical new political response.

Toward the Star Trek Economy

Nobody knows what the future holds. But that doesn't mean it isn't fun to play the "what if" game. What if no one can find a job? What if everything comes under the control of a few trillionaires and their robot armies? And, most interesting of all: What if we're asking the wrong questions altogether?

What if, after a tumultuous transition period, the economy evolves beyond anything we would recognize today? If technology continues on its current trajectory, it will inevitably lead to a world of abundance. In this new civilization 2.0, machines will conceivably be able to answer just about any question and make just about everything available. So, what does that mean for us lowly humans?

"I think we're heading towards a world where people will be able to spend their time doing what they enjoy doing, rather than what they need to be doing," Planetary Ventures CEO, X-Prize cofounder, and devoted techno-optimist Peter Diamandis told me when I interviewed him last year. "There was a Gallup Poll that said something like 70 percent of people in the United States don't enjoy their job—they work to put food on the table and get health insurance to survive. So, what happens when technology can do all that work for us and allow us to actually do what we enjoy with our time?"

It's easy to imagine a not-so-distant future where automation takes over all the dangerous and boring jobs that humans do now only because they have to. Surely there are drudging elements of your workday that you wouldn't mind outsourcing to a machine so you could spend more time with the parts of your job that you do care about.

One glass-half-full vision could look something like the galaxy portrayed in Star Trek: The Next Generation, where abundant food replicators and a post-money economy have eliminated the need to do... well, anything. Anyone in Starfleet could have spent all their time playing 24th-century video games without any fear of starvation or homelessness, but they decided their time was better spent exploring the unknown. Captain Picard and the crew of the USS Enterprise didn't work because they feared what would happen if they didn't—they worked because they wanted to.

Nothing is inevitable, of course. A thousand things could divert us from this path. But if we ever do reach a post-scarcity world, then humanity will be compelled to undergo a radical reevaluation of its values. And maybe that's not the worst thing that could happen to us.

Perhaps we shouldn't fear the idea that all the jobs are disappearing. Perhaps we should celebrate the hope that nobody will have to work again.
