Two futurists clash on what 2030 will bring. Ray Kurzweil anticipates godlike capabilities at the point of the singularity, when artificial intelligence (AI) surpasses human intelligence. In contrast, Bill Joy anticipates danger.

Sixteen years ago, I read with great interest and growing unease Bill Joy's essay in Wired, titled "Why the Future Doesn't Need Us." Joy, co-founder of Sun Microsystems and today a venture capitalist at Kleiner Perkins Caufield & Byers, questioned the wisdom, the future and the ethics of advances in genetics, nanotechnology and robotics (GNR).

In the essay, Joy describes learning from futurist Ray Kurzweil, today a Director of Engineering at Google, how rapidly GNR technologies could lead to the singularity, and he seriously questions the wisdom of continuing down that path. While he admits that each holds the potential for great promise, medical cures and treatments, a substantially extended human life span and a better quality of life, Joy cautions, "Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger."

There is ample evidence that we humans err on the side of too little foresight before meddling. Consider that the influenza virus of 1918 killed an alarming 50 million people; what was the rationale behind the U.S. Department of Health and Human Services' decision to publish the virus's complete genome on the Internet? Not only does that make the virus far easier to replicate, but some estimate the potential for harm to be worse than that of an atomic bomb. Need more examples of human meddling? Invasive species we've unleashed, such as the cane toad; altered insects; the long-term health effects of pesticide use; the running controversy over the safety of genetically engineered crops; and the consistent hacking of our most important data through devices that are just plain insecure.

Ray Kurzweil’s View

Speaking at a conference in 2015, Ray Kurzweil predicted that by the 2030s the neocortex of our brains will be connected directly to the cloud. He anticipates that nanobots in our brains will render us godlike, yet that our brains will not become obsolete. That sounds appealing, but how do we know it will be true? As we pass each technological milestone, do we ever reverse course to undo an advance because it proved too dangerous?

Kurzweil also states that once our brains are hooked up to computers, we will be funnier, sexier and more loving. He claims that tiny robots built from DNA strands will extend our brains into not just artificial intelligence but emotional intelligence as well. Forgive me, but I believe I've worked in this industry too long. If engineers are responsible for extending human intelligence into the emotional realm, I'm unconvinced that most of them can pull off sexy, funny or more loving.

What is true is that there are plenty of "hooks" pulling at us: curing diseases, living longer than the eighty-some years most of us will get. Maybe we should ask what kind of lives those would be.

Kurzweil readily admits that the emergence of AI as the norm will not alleviate existing conflicts; humans will simply be more than they are today, with expanded intellectual weaponry. The best way to counter this, he states, is to "…work on our democracy, liberty and respect for each other." In today's chaotic global society, how much can that be relied upon?

He also admits that jobs that exist now will disappear as robots do our work for us. Yet he is convinced that new, as yet unidentified jobs will emerge, and that those jobs will move us up Maslow's hierarchy, giving us time for personal gratification and a high standard of living for everyone. But he looks to entrepreneurs and college students, who represent only a portion of society. What about the rest?

Convinced that we will eventually grow comfortable with the idea of sharing the world with artificial intelligence, Kurzweil writes in his 2012 book How to Create a Mind, "As you get to the late 2030s or 2040s, our thinking will be predominately non-biological and the non-biological part will ultimately be so intelligent and have such vast capacity it will be able to model, simulate and understand fully the biological part. We will be able to fully back up our brains."

Bill Joy’s Stance

A meeting with Kurzweil, and Kurzweil's assertion that the day when humans would become robots or merge with them was fast approaching, left Bill Joy deeply uneasy. In the resulting Wired article, Joy assumed that Moore's Law (computer processing speeds double roughly every 18 months) would hold through 2030, unleashing ultra-powerful computing built on molecule-sized processors. It was the potential for self-replication and independence that such "nanobots" could attain that, for him, represented the greatest risk.
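The scale implied by that extrapolation is easy to sanity-check. A minimal sketch of the doubling arithmetic, assuming (as Joy's essay does) a clean 18-month doubling period, and taking the essay's publication year of 2000 as an illustrative starting point:

```python
def speedup(start_year: int, end_year: int, doubling_months: int = 18) -> float:
    """Multiplicative increase in processing speed between two years,
    assuming speed doubles every `doubling_months` months (Moore's Law
    as cited by Joy)."""
    months = (end_year - start_year) * 12
    doublings = months / doubling_months
    return 2 ** doublings

# From 2000 to Joy's 2030 horizon: 30 years = 20 doublings,
# i.e. roughly a million-fold increase.
print(f"{speedup(2000, 2030):,.0f}x")  # 1,048,576x
```

Twenty doublings in thirty years is what turns "small, individually sensible advances" into the accumulation of power Joy warns about.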

Joy questioned not only the job front (whether there would be enough jobs, and the skills to do them) but also whether political systems around the globe could handle Kurzweil's robotic dreams. He wrote in his essay, "…in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable."

Joy explained that as we voluntarily hand our power over to the machines, they could do little else than take and expand it as humans lose the ability to keep up intellectually. At that point, it will be impossible not to defer still more decisions to the robots. We can see early evidence of this today as driverless cars, less prone to accidents, take the wheel from humans, and in a variety of medical situations. In these cases, the trade seems attractive. In others, however, it's downright scary.

What happens in 2030 when a few are "godlike" in their intelligence and in their ability to live well past normal life spans? Will a handful of the elite rule? Will the masses be reduced to "sheep"? Will decisions be made to limit reproduction among the masses to reduce the drain on the planet's resources? Are you concerned?

Can it happen?

Yes. How would we maintain control of a super-intelligent robot that can clone itself? We are drawn to the faster, simpler, easier path: handing off decisions, improving results even marginally, and trusting, as a species, that we're bright enough to see whatever is coming that would be filed under the heading of Murphy's Law. We just aren't.

Today, Kurzweil is still chasing the godlike, and Joy believes we might yet be able to steer technology in the right direction. But if, in just 14 more years, we do merge with robots and the singularity becomes reality, how can we trust that it will be a positive experience in the long term? After all, in 2016 we still can't keep our personal private information out of the hands of hackers.

As 2030 nears, Bill Joy says it best: “Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know -- that is the nature of science's quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.”

Joy asks, “Given the incredible power of these new technologies, shouldn't we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?”

What do you think?