Image caption Robotic technology, helped by innovations like Microsoft's Kinect, is enabling machines to "see" and adapt to their surroundings

Over the past decade, astute tech watchers may have noticed two new waves of robots intersecting our lives.

The first is the fastest-growing segment of the vacuum cleaning market: robotic vacuum cleaners.

The second is the newest class of weapon in our militaries - air drones, ground robots for dealing with explosive devices, and underwater robots to map out what is going on in our oceans.

But there has also been a less obvious set of academic robot research going on - one that will impact many aspects of business.

It is research which began over 20 years ago, in 1990, with a concept named Simultaneous Localization and Mapping (Slam).

As small mobile robots started to be built in research labs, academics around the world began working on robots that could build maps from visual, sonar, laser range and other sensor data.

Since the robots were mobile and didn't know exactly where they were, the challenge was to simultaneously figure out the relative positions and orientations of a robot as it made different observations about its situation.

This would be an easy task for a robot if it already had an accurate map to work with - but at the same time as it was making those observations, it also had to build up a picture of its surroundings.

Hence the word "simultaneous" in the name.
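The coupling can be illustrated with a toy example. What follows is a deliberately simplified sketch, not any real Slam implementation: a robot moves along a line with noisy odometry (step measurements) and noisy range readings to a single landmark, and must estimate its own positions and the landmark's position at the same time, since neither is known on its own.

```python
import random

random.seed(1)

# Toy 1-D "Slam": a robot takes steps along a line. Its odometry is
# noisy, and at each pose it measures the range to one landmark, also
# noisily. Neither the poses nor the landmark position are known in
# advance - we estimate both simultaneously.

TRUE_LANDMARK = 10.0
true_poses = [0.0, 1.0, 2.0, 3.0, 4.0]

# Noisy odometry: measured step sizes between consecutive poses.
odometry = [(true_poses[i + 1] - true_poses[i]) + random.gauss(0, 0.1)
            for i in range(len(true_poses) - 1)]

# Noisy range readings from each pose to the landmark.
ranges = [(TRUE_LANDMARK - p) + random.gauss(0, 0.1) for p in true_poses]

# Initial guess: dead-reckon the poses from odometry alone, and guess
# the landmark from the first range reading.
poses = [0.0]
for step in odometry:
    poses.append(poses[-1] + step)
landmark = poses[0] + ranges[0]

# Alternate: re-estimate the landmark given the current poses, then
# nudge each pose toward agreement with both odometry and ranges.
# Each estimate improves the other - hence "simultaneous".
for _ in range(50):
    landmark = sum(p + r for p, r in zip(poses, ranges)) / len(poses)
    for i in range(1, len(poses)):
        from_odometry = poses[i - 1] + odometry[i - 1]
        from_range = landmark - ranges[i]
        poses[i] = 0.5 * (from_odometry + from_range)

print("estimated landmark:", round(landmark, 2))
print("estimated poses:", [round(p, 2) for p in poses])
```

Real Slam systems work in two or three dimensions with thousands of landmarks and use probabilistic filters or graph optimisation rather than this simple alternation, but the core idea is the same: map and pose estimates refine each other.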

Today, Slam algorithms are exceptionally good, and the sensors that can be used to collect data are now very low-cost.

The computation needed to run the algorithms now fits in tiny embedded processors, which are computationally more powerful than the mainframes we used a generation ago.

Self-driving

These Slam algorithms are at the core of Google's self-driving cars.

The techniques they rely on are also at the core of the capabilities being introduced into high-end cars.

In the next few years this technology will allow for automatic lane following and changing, automatic driving in stop-go traffic, and even fully automated navigation, which will become pervasive over the next decade.

Image caption Rodney Brooks says all our vehicles are slowly becoming robots, not just machines

In short, our cars are becoming robots.

But more rapidly than that, so are our agricultural tractors, our construction vehicles, our mining vehicles, and even our warehousing and manufacturing plant vehicles. All types of vehicles are becoming roboticised.

And now the next wave of robots in ordinary life is being enabled by the incredible success of a computer gaming system: the Microsoft Kinect, a three-dimensional vision system designed primarily for gaming.

The Kinect is seen as a significant breakthrough, particularly given its cost.

While academics have been working on computer vision for over 50 years, they have not been able to produce systems that see the world with anything like the capabilities of the human eye.

The fundamental issue here is another sort of "simultaneous" problem.

When humans see the world, they see reflected light from objects that they wish to recognise, but that light arrives with a brightness, colour and distribution that are simultaneously unknown.

The human vision system miraculously (and for a computer vision researcher, it does seem miraculous) figures out the structure of the light.

It can recognise shadows and how they affect colours - and the identity and three-dimensional location of objects, including objects that have never been seen before.

Next wave

The Kinect solves a simpler problem.

It projects infrared light - which is invisible to humans - with a known intensity and distribution pattern, and from the reflection builds a three-dimensional reconstruction of what is in the world.
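The geometry behind this is triangulation: because the infrared projector and the camera sit a fixed distance apart, a nearby object shifts the projected pattern more than a distant one, and that shift (disparity) gives the depth. The sketch below shows the relationship; the focal length and baseline are assumed, illustrative Kinect-like values rather than Microsoft's actual specifications, and a real system matches the pattern across the whole image rather than taking a single disparity number.

```python
# Structured-light depth by triangulation: a projector casts a known
# infrared pattern, and a camera at a fixed baseline sees that pattern
# shifted sideways by a disparity that depends on depth. With focal
# length f (in pixels) and baseline b (in metres): depth Z = f * b / d.

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Triangulated depth in metres. The default focal length and
    baseline are illustrative, Kinect-like values (assumptions,
    not official specifications)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A nearby object shifts the pattern a lot; a distant one, only a little.
near = depth_from_disparity(40.0)  # large shift -> close object
far = depth_from_disparity(15.0)   # small shift -> distant object
print(round(near, 2), "m vs", round(far, 2), "m")
```

Repeating this for every point in the projected pattern yields a full three-dimensional reconstruction of the scene, which is what the Kinect hands to the software described below.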

It includes special software that can match a moving three-dimensional pattern to a generic human body, so that the system can "see" people and what they are doing with their arms and head.

Image caption Microsoft's Kinect uses technology to interact with people

This technology is simply revolutionary for robots.

It allows them to be aware of people nearby, and at the gross level, to be aware of what those people are doing.

So now researchers all over the world are starting to have their robots interact with people in new and interesting ways.

This is where the next wave of robots is starting to come from.

Their exact applications are not yet known, but I will be very surprised if we do not start seeing new robots in health care and care for the elderly, and then see them move into service industries in general.

At my own company, we're building a new sort of industrial robot that ordinary factory workers can train to do simple tasks.

We're going to see more and more robots, not just robot vacuum cleaners and robot cars, in our everyday lives.

Rodney Brooks is the founder of Rethink Robotics, the company behind Baxter the factory robot. He is a former professor of robotics at MIT, and author of Flesh and Machines: How Robots Will Change Us.