Nobody wants to get near nuclear waste, so a team of Lancaster University engineers is developing a new semi-autonomous robotic system to help dismantle decommissioned reactors. Using new imaging software and a Microsoft Kinect camera, the two-armed mobile robot can identify, grasp, and cut objects like pipes without the operator having to control every movement.

The decommissioning of a nuclear reactor is a long and expensive job that requires a very high degree of skill. Unfortunately, it's also a job that has to be done in a highly radioactive environment contaminated with nuclear waste of every level. This puts the workers in the unenviable position of needing to be right on the spot and as far away as possible at the same time.

"The standard within nuclear decommissioning is for direct human-controlled remote tele-operation of robots, which is extremely difficult for the operators, particularly given the complexity of nuclear decommissioning tasks," says James Taylor, Professor of Control Engineering at Lancaster University's Department of Engineering. "Fully autonomous solutions are unlikely to be deemed safe in the near future, and so we have explored creating a semi-autonomous solution that sits between the two."

The prototype robot has hydraulically powered arms and manipulators, with a camera providing visual information. However, the camera isn't just a simple closed-circuit device. Instead, a computer analyzes the images, allowing the robot to identify objects and work out how to grasp, manipulate, and cut them. Rather than using a conventional joystick, the operator points out the desired object on the screen and the robot handles the rest.

"By making use of a single camera mounted on the robot our system focuses on a common task in these harsh environments – the selecting and cutting of pipes," says Taylor. "Our system enables an operator to instruct the robot manipulator to perform a pipe grasp and cut action with just four mouse clicks. Tests show that operators using this system successfully outperform operators using the current joystick-based standard. It keeps the user in control of the overall robot but significantly reduces user workload and operation time."

Because the system is intended for operators without extensive training, the project is also working on equipping the robot with additional sensors, for things like audio and temperature, to give the user more feedback and a better sense of place.

This includes designing a graphical user interface that turns a flood of raw data into meaningful readouts that can be quickly interpreted and understood. It's a bit like a car's fuel gauge, which doesn't provide an exact numerical readout of how many gallons remain in the tank, but instead turns the data into a simple dial with a needle traveling between E and F.

The team says that the robot has been tested in the laboratory, with a small number of operators carrying out tasks like cutting plastic pipes. Before the robot is ready for practical deployment, more work needs to be done, including shielding the machine against radiation damage.

The research was published in Robotics.

Source: Lancaster University