From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It’s usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published August 10, 2010. We hope you enjoy it!

You don’t have a flying car, jetpack, or ray gun, but this is still the future. How do I know? Because we’re all surrounded by artificial intelligence. I love when friends ask me when we’ll develop smart computers…because they’re usually holding one in their hands. Your phone calls are routed with artificial intelligence.

Every time you use a search engine you’re taking advantage of data collected by ‘smart’ algorithms. When you call the bank and talk to an automated voice you are probably talking to an AI…just a very annoying one. Our world is full of these limited AI programs which we classify as “weak” or “narrow” or “applied.”

These programs are far from the sentient, love-seeking, angst-ridden artificial intelligences we see in science fiction, but that’s temporary. All these narrow AIs are like the amino acids in the primordial ooze of the Earth. The ingredients for true human-like artificial intelligence are being built every day, and it may not take long before we see the results.

How did we create the jungle of AI that surrounds us today?

Let me answer that question with someone else's question. During the panel discussion for the Transcendent Man documentary about Ray Kurzweil at the Tribeca Film Festival, a viewer asked the futurist whether there would be another explosion of AI that leads us to the singularity. Another explosion?

Yes. You see, back in the late 1980s scientists started rethinking the way they pursued AI. Rodney Brooks of MIT (also a co-founder of iRobot) took a new approach. Instead of developing AI from the top down, he built things from the bottom up. Instead of artificial reasoning, he looked at artificial behavior.

The result was robots that based their actions on basic instincts and patterns. iRobot's Roomba doesn't vacuum a floor with high-level reasoning about how the carpet should eventually look; it runs through a series of simple cleaning patterns until the whole carpet is clean.
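The idea can be sketched in a few lines of code. Here's a toy behavior-based controller in the spirit of Brooks' subsumption architecture; it's an illustrative sketch (not iRobot's actual code), where each behavior is a simple rule that reacts to the robot's immediate sensor state, and higher-priority behaviors suppress lower ones:

```python
import random

def escape(state):
    """Highest priority: back away if we've bumped into something."""
    if state["bumped"]:
        return "reverse"
    return None

def spiral(state):
    """Cover open floor by spiraling outward."""
    if state["open_floor"]:
        return "spiral"
    return None

def wander(state):
    """Default: pick a random direction and go."""
    return random.choice(["forward", "turn_left", "turn_right"])

# Behaviors ordered from highest to lowest priority.
BEHAVIORS = [escape, spiral, wander]

def select_action(state):
    """Return the action of the first behavior that fires."""
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action

print(select_action({"bumped": True, "open_floor": True}))   # reverse
print(select_action({"bumped": False, "open_floor": True}))  # spiral
```

Notice there's no world model and no plan: intelligent-looking floor coverage emerges from a stack of dumb reflexes, which is exactly the bottom-up bet Brooks made.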

That’s behavior-based AI, and it’s powerful stuff.

Along with increased processing power, artificial intelligence really took off in the 90s. Using modular and hierarchical techniques like Brooks' behavior-based approach, researchers were able to create a bunch of AIs that did things. These weren't philosopher programs; they worked for a living. Data mining, inventory tracking and ordering, image processing: these jobs all started falling to AIs that built simple patterns into algorithms capable of handling dynamic tasks.

Now that list of tasks has expanded. We’re slowly building a library of narrow AI talents that are becoming more impressive. Speech recognition and processing allow computers to convert sounds to text with greater accuracy. Google is using AI to caption millions of videos on YouTube.

Likewise, computer vision is improving so that programs like Vitamin D Video can recognize objects, classify them, and understand how they move. Narrow AI isn’t just getting better at processing its environment, it’s also understanding the difference between what a human says and what a human wants.

Programs like BlindType compensate for human input error, and next-generation phone-answering services convert your requests into commands. By assigning values to different situations, narrow AIs can make choices that maximize their rewards, an approach that let the ASIMO robot figure out the best way through an obstacle course.
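That value-maximizing approach is the heart of reinforcement learning, and a minimal version fits in a short script. This is a sketch, not ASIMO's actual planner: we run value iteration on a tiny one-dimensional "obstacle course", where each state gets a value estimating the best discounted reward reachable from it, and the agent then greedily steps toward higher values.

```python
N = 6                    # states 0..5 along a corridor
GOAL, OBSTACLE = 5, 2    # reaching state 5 pays off; crossing state 2 is costly
GAMMA = 0.9              # discount factor: future rewards count a bit less

def reward(s):
    if s == GOAL:
        return 10.0
    if s == OBSTACLE:
        return -5.0
    return -1.0          # small step cost encourages short paths

def step(s, action):     # action is -1 (left) or +1 (right)
    return min(max(s + action, 0), N - 1)

# Value iteration: repeatedly back up the best one-step lookahead.
V = [0.0] * N
for _ in range(100):
    V = [0.0 if s == GOAL else
         max(reward(step(s, a)) + GAMMA * V[step(s, a)] for a in (-1, 1))
         for s in range(N)]

def best_action(s):
    """Greedy choice: the action leading to the most valuable next state."""
    return max((-1, 1), key=lambda a: reward(step(s, a)) + GAMMA * V[step(s, a)])

print(best_action(0))  # 1 (step right, toward the goal)
```

Nobody tells the agent the route; it learns that paying the obstacle's penalty is worth it because the goal's reward outweighs the cost, which is the same "maximize value" logic scaled down from a humanoid robot to six states.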

Artificial intelligence is also getting better at analyzing large sets of data and synthesizing new data that fits the set, which we’ve seen in programs that write music or create new art.
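The "synthesize new data that fits the set" trick can also be shown in miniature. The sketch below (a toy illustration, not how any real music-generation system works) learns note-to-note transition counts from a couple of example melodies, then samples a new melody with the same local statistics:

```python
import random
from collections import defaultdict

# Training set: a couple of short example melodies.
melodies = [
    ["C", "D", "E", "C", "D", "E", "G", "E", "C"],
    ["C", "E", "G", "E", "D", "C", "D", "E", "C"],
]

# Count which note tends to follow which (a first-order Markov chain).
transitions = defaultdict(list)
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start="C", length=8, seed=0):
    """Sample a new melody whose note-to-note moves all occur in the data."""
    random.seed(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(random.choice(transitions[notes[-1]]))
    return notes

print(generate())
```

Every transition in the output was seen in the training melodies, so the new tune "fits the set" without copying any one example, which is the essence of this kind of synthesis.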

These are the building blocks for the next explosion of AI tools.

Do you want a security guard AI? That's computer vision plus interpretation of human actions. How about a program that answers your toddler's endless questions? That's speech recognition plus interpretation of human requests plus a large database of knowledge plus creation of new datasets (we've already seen it work for Jeopardy!).

Of course, things aren’t simply plug ‘n play at this point, but you can see that as each application of narrow AI is perfected it can feed into a more complex task.

There are three key factors that will enable the creation of a strong artificial intelligence that can think like a human being.

1. We need greater computing power to match and mimic the brain.
2. We need to better understand the hardware of the brain and the way it processes information.
3. We need to find ways for an AI to approach higher and higher levels of problem solving.

Each of these requirements is on its way to being fulfilled.

Kurzweil (among others) predicts the continued exponential growth of processor power. The Blue Brain Project (and other research) is exploring the brain and seeking to simulate its functions. I think that the growing presence of narrow AI speaks to the third need.

There are many different approaches to AI research, and not all of them are compatible. But as we develop more and more programs that can handle simple decision making, I think we are building a library of problem-solving techniques that will eventually grow into a human-like hierarchical reasoning structure.

When is the first sentient computer lifeform going to arrive? I have no idea.

But the seeds of its birth are scattered through the advanced technologies we use every day. So pick up your smartphone while traveling on a moving train, call an international bank, and ask the artificial voice that answers to recite your last ten financial transactions.

You’ll be flexing the muscles of many different modern AIs, and you know that the exercise is good for the brain.

Image credit: Shutterstock