As robots become more elaborate and widely deployed, the requirement for them to communicate and collaborate effectively with one another increases.

This was typified by the CARLOS project that I wrote about recently. It was designed to showcase the use of automation in a semi-structured environment.

It’s the kind of environment that has traditionally stumped researchers. Most modern shipyards use block construction, whereby chunks of the ship are built in the shipyard and then joined together at the building dock.

The work is repetitive, not to mention dangerous, and has typically been done by hand. Until now, at least, the environment has proven beyond the reach of automation.

Robot navigation

Central to this task is effective navigation, and a recently published paper highlights the progress that has been made in this area.

The researchers, from the A*STAR Institute for Infocomm Research, modeled their robot’s navigation on the method used by the human brain.

This relies on two distinct kinds of brain cell: place cells and grid cells. Place cells activate when we are in a familiar place, while grid cells provide an absolute reference to determine where we are on a map.

The brain’s method of navigation shares some similarities with how sailors navigate in uncharted waters.

“A sailor will use cues such as the stars or landmarks to determine where their ship is on a map, and then, as the ship moves, will update its location on the map by observing only speed and direction,” the researchers say.

This kind of navigation is handled by the grid cells in the brain, which provide a virtual reference point for spatial awareness. When we pass a spot that a grid cell has previously recorded, it reactivates and helps us understand where we are relative to those earlier coordinates.
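The sailor’s dead-reckoning scheme described in the quote can be sketched in a few lines: integrate speed and heading observations over time to update a position estimate. This is a toy illustration of the idea, not the researchers’ implementation.

```python
import math

def dead_reckon(start, movements):
    """Integrate (speed, heading) observations into a position estimate,
    the way a sailor updates a ship's location on the map."""
    x, y = start
    for speed, heading_deg in movements:
        theta = math.radians(heading_deg)
        x += speed * math.cos(theta)  # eastward component of this leg
        y += speed * math.sin(theta)  # northward component of this leg
    return x, y

# Starting at the origin, travel 3 units east, then 4 units north.
print(dead_reckon((0.0, 0.0), [(3.0, 0.0), (4.0, 90.0)]))  # ~(3.0, 4.0)
```

The weakness of pure dead reckoning is that small errors in speed and direction accumulate, which is exactly why the brain (and the robot) also needs landmark-based place recognition to correct the estimate.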

With both grid and place cells in use, we’re able to accurately navigate through our environment.

Finding the way

By using computer programs to simulate the activity of place and grid cells in the human brain, the team believes it has replicated the kind of neural scheme used in our brains.

Fundamental to the process is the feedback that takes place between the simulated grid and place cells inside the robot’s CPU. Robust calibration of the visual signals is also crucial to allow the algorithm to produce internal maps.
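To give a feel for what such simulated cells might look like, here is a minimal sketch using two standard idealized models from the neuroscience literature: a Gaussian place field and the three-cosine hexagonal grid pattern. The function names and parameters are illustrative assumptions, not the researchers’ actual code.

```python
import math

def place_cell_activity(pos, field_center, width=1.0):
    """Gaussian place field: fires strongly near its one preferred spot
    and falls off with distance from it."""
    d2 = (pos[0] - field_center[0]) ** 2 + (pos[1] - field_center[1]) ** 2
    return math.exp(-d2 / (2 * width ** 2))

def grid_cell_activity(pos, spacing=2.0):
    """Sum of three plane waves at 60-degree offsets, a common idealized
    model of a grid cell's repeating hexagonal firing pattern."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)
    total = 0.0
    for angle_deg in (0.0, 60.0, 120.0):
        theta = math.radians(angle_deg)
        total += math.cos(k * (pos[0] * math.cos(theta) +
                               pos[1] * math.sin(theta)))
    return total / 3.0

# A place cell peaks at its preferred location; a grid cell fires at
# every vertex of its lattice, giving a tiling reference instead.
print(place_cell_activity((2.0, 3.0), (2.0, 3.0)))  # 1.0 at the field center
print(grid_cell_activity((0.0, 0.0)))               # 1.0 at a lattice vertex
```

The feedback the paper describes would couple these two populations: grid-cell activity keeps a running position estimate, while strong place-cell matches from vision snap that estimate back onto the map.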

The machine was put through its paces in a small 35-meter-square building, and it achieved a degree of success in finding its way around.

“Cognitive maps can help the robot when it is lost, because they can provide global topological information of the navigating environment to help the robot localize itself,” the researchers say.

A drone-based guide

An alternative approach has been taken by a Swiss robotics group called Autonomous System Lab. Rather than relying on internal computation, they have used drones to provide navigation for the robot.

The drone first flies over an area to map it in three dimensions. This data is then fed to the robot, which uses it to calculate an effective path from A to B. The device is then capable of rather shakily charting that path.
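Once the drone’s map is reduced to an occupancy grid, computing a path from A to B is a classic search problem. Below is a minimal A* sketch over a hypothetical grid (0 = free, 1 = obstacle); the grid values and function are illustrative, not the company’s actual pipeline.

```python
from heapq import heappush, heappop

def astar(grid, start, goal):
    """A* over an occupancy grid; returns a list of (row, col) cells
    from start to goal, or None if the goal is unreachable."""
    def h(a, b):  # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    open_set = [(h(start, goal), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, cell, path = heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heappush(open_set, (cost + 1 + h((nr, nc), goal),
                                    cost + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall the route must skirt around
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours right, down, and back left
```

In practice the robot would replan continuously as its localization drifts, which is part of why its progress along the path looks so shaky.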

Suffice it to say, the approach still has a long way to go before it becomes effective for widespread use, but it’s a sign of the progress being made.

Check out the video below to see the pair in action.