A specially designed motion planning processor developed by researchers at Duke University will allow robot arms to plan their movements up to 10,000 times faster than previous technologies.

Unlike humans, robots generally find it hard to adapt to environments where new movements are required. This means that picking up an object in a situation that has not been pre-engineered may require several seconds of computation. However, the Duke team claims its new processor is fast enough to plan and operate in real time, and power-efficient enough to be used in large-scale manufacturing environments.

“When you think about a car assembly line, the entire environment is carefully controlled so that the robots can blindly repeat the same movements over and over again,” said George Konidaris, assistant professor of computer science and electrical and computer engineering at Duke.

“The car parts are in exactly the same place every time, and the robots are contained within cages so that humans don’t wander past. But if your robot is using motion planning in real time and a part is in a different place, or there’s some unexpected clutter, or a human walks by, it’ll do the right thing.”

Although motion planning has been studied for 30 years, existing approaches still mostly rely on general-purpose CPUs to crunch the numbers. The most time-consuming aspect of the calculations is collision avoidance, so the researchers built a processor designed specifically to tackle this challenge by performing thousands of collision checks in parallel.

It works by breaking down the arm’s operating space into thousands of 3D volumes called voxels. The algorithm then determines whether an object occupies any of the voxels swept by a set of pre-programmed motion paths. Thanks to the specially designed hardware, the technology can check thousands of motion paths simultaneously, then select the shortest path from the “safe” options that remain.
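The core idea can be sketched in a few lines. This is a simplified illustration of voxel-based path selection, not the Duke team's actual implementation: each candidate motion path is represented by the set of voxels the arm sweeps through, a path is "safe" if none of its voxels is currently occupied, and the planner picks the shortest safe path. (In the custom hardware these checks run in parallel; here they are simply looped over.)

```python
def plan(paths, occupied_voxels):
    """Return the shortest collision-free path, or None if all are blocked.

    paths: list of (length, swept_voxels) tuples, where swept_voxels is a
           frozenset of (x, y, z) voxel indices the arm passes through.
    occupied_voxels: set of voxel indices where an obstacle was detected.
    """
    # Collision check: a path is safe if its swept voxels share nothing
    # with the occupied voxels. The hardware does these checks in parallel.
    safe = [(length, voxels) for length, voxels in paths
            if voxels.isdisjoint(occupied_voxels)]
    if not safe:
        return None
    # Of the surviving options, take the shortest.
    return min(safe, key=lambda p: p[0])

# Example: two candidate paths; an obstacle blocks the shorter one,
# so the planner falls back to the longer detour.
paths = [
    (1.0, frozenset({(0, 0, 0), (1, 0, 0)})),   # direct path
    (2.5, frozenset({(0, 1, 0), (1, 1, 0)})),   # detour
]
occupied = {(1, 0, 0)}                          # obstacle on the direct path
print(plan(paths, occupied))                    # picks the 2.5-length detour
```

The path lengths, voxel coordinates, and `plan` function are all hypothetical; the real system operates on precomputed roadmaps of arm trajectories rather than the toy tuples shown here.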

“The state of the art prior to our work used high-performance, commodity graphics processors that consume 200 to 300 watts,” said Konidaris. “And even then, it was taking hundreds of milliseconds, or even as much as a second, to find a plan. We’re at less than a millisecond, and less than 10 watts. Even if we weren’t faster, the power savings alone will add up in factories with thousands, or even millions, of robots.”