At Robotiq we are constantly searching the Internet for the latest news in robotics. Recently we found what looks to be Google's new robot. It's not a secret robot. In fact, Google has been buying robotics companies like crazy to build up its internal knowledge. They also have labs dedicated to robotics, so it isn't really a big surprise to see what seems to be a collaborative robot from Google. However, the way they use it and the different experiments run by the robot are quite impressive. Let's find out what's up with the Google robot.

The Robot Cell

The robots are used for machine learning applications and they seem to be quite advanced. At first sight they look market ready. However, it does not sound like the video and article were published for that purpose; rather, the goal seems to be to demonstrate the machine learning application. Let's take a look at the robot cell.

Robot: This seems to be a Google robot prototype. We heard that Google was developing a 7-axis robot. Everything seems to confirm that the robot will be used for general purpose applications, such as kitting (like the applications from the Amazon Picking Challenge).

Gripper: The gripper is a two-finger parallel gripper from Weiss Robotics. From what I know, the fingers fitted on it are also prototypes. They seem to be very versatile, allowing for both encompassing and parallel grasping. The impressive part of this gripper is its ability to grasp really thin objects lying on a flat surface, scissors for example.

Vision: A camera is set on the robot cell to observe the scene. It is used to locate and identify the objects, but it also seems to provide other feedback to the robot. In fact, when the robot is manipulating an object, it instantly knows where the object is and can move its end effector to the right spot to grab it correctly.

Sensor: The video leads me to believe that the robot is using an in-house version of a force torque sensor. This type of technology is widely used in applications like pick and place, since it can help identify the type of object. It can also provide feedback about whether the object is grasped correctly. We know this from experience with our very own FT-300 Force Torque Sensor.
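To give a feel for how force feedback can confirm a grasp, here is a toy sketch: after the gripper closes, a wrist force sensor reads the payload, and comparing that reading against known object weights gives a rough sanity check. All names, weights, and thresholds here are invented for illustration; this is not Google's or Robotiq's actual implementation.

```python
# Hypothetical object weights in newtons, invented for this sketch.
KNOWN_WEIGHTS_N = {"scissors": 0.8, "stapler": 2.4, "sponge": 0.1}

def grasp_succeeded(fz_newtons, min_load=0.05):
    """If the sensor reads a payload after the gripper closes,
    something was picked up; a near-zero reading means a miss."""
    return abs(fz_newtons) >= min_load

def guess_object(fz_newtons, tolerance=0.2):
    """Match the measured load to the closest known object weight."""
    if not grasp_succeeded(fz_newtons):
        return None  # nothing in the gripper
    name, weight = min(KNOWN_WEIGHTS_N.items(),
                       key=lambda kv: abs(kv[1] - abs(fz_newtons)))
    return name if abs(weight - abs(fz_newtons)) <= tolerance else "unknown"

print(guess_object(0.75))  # scissors
print(guess_object(0.0))   # None (nothing grasped)
```

In practice a full six-axis force torque signal gives much richer information than this single-axis toy, but the principle is the same: the sensor reading closes the loop on whether the grasp actually worked.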

What is it Used for?

The array of robots is used to collect grasping data. If you want to teach a robot how to grasp something, you have to show it how to do it. But if 13 other robot colleagues are doing the exact same thing, you can pool the data, which gives the robot much more feedback about how it is grasping objects. As cited in this article from Google Research, after "800,000 grasp attempts, which is equivalent to about 3000 robot-hours of practice, we can see the beginnings of the intelligent reactive behaviors. The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group. All of these behaviors emerged naturally from learning, rather than being programmed into the system."

The very instructive article also explains the difference between open-loop and closed-loop vision control. The grasping failure rate is dramatically lower with closed-loop vision control. This highlights the importance of having sensors in the control loop to determine whether the object has been grasped or not, for example force torque, tactile or vision sensors.
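The open-loop versus closed-loop distinction can be shown with a toy 1-D positioning example: open-loop plans once from the initial vision estimate and executes blindly, so any estimation error persists, while closed-loop re-observes the error every control cycle and corrects toward the target. The gains, tolerances, and function names are invented for this sketch, not taken from the article.

```python
def open_loop_grasp(target, initial_estimate):
    """Plan once from the initial vision estimate, then execute blindly."""
    gripper = initial_estimate   # moves to the (possibly wrong) estimate
    return abs(target - gripper)  # residual error is never corrected

def closed_loop_grasp(target, initial_estimate, gain=0.5, tol=0.01, max_steps=50):
    """Re-observe the error each control cycle and correct toward the target."""
    gripper = initial_estimate
    for _ in range(max_steps):
        error = target - gripper   # visual feedback each cycle
        if abs(error) <= tol:
            break
        gripper += gain * error    # proportional correction
    return abs(target - gripper)

# With a 2 cm initial estimation error, open-loop keeps the full error,
# while closed-loop converges to within tolerance.
print(open_loop_grasp(10.0, 12.0))    # 2.0
print(closed_loop_grasp(10.0, 12.0))  # <= 0.01
```

The same logic explains the failure-rate gap: with feedback in the loop, a mediocre initial estimate is recoverable, whereas open-loop control lives or dies by the accuracy of the first camera measurement.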

What Does it Mean?

Well, Google is certainly uniquely placed to master robotic grasping through machine learning. In fact, I don't know of any other lab doing these kinds of experiments on such a large scale to improve robot ‘intelligence’. As Google has shown us with their driverless cars, they will be doing a lot of experiments to make sure they cover every possibility. Since they have the resources, they will no doubt bring robotic grasping intelligence and versatility to the next level.

In my opinion, I don't see Google commercializing this robot within the next year; they certainly haven't made any announcements to this effect. However, when they are ready, they will have a LOT of machine experience under their belt, and they will probably push robotics a little closer to the future we imagine. It is quite exciting to see a company like Google get involved in robotics, and to see the level of dexterity the robots and grippers are capable of.

So will the Google robot be part of the next version of our Collaborative Robot Comparative eBook, or will it stay in the research lab for a while yet? Who knows? In the meantime, maybe you can find some useful advice in our latest document, How to Shop for a Collaborative Robot, and see what your cobot needs are for your specific applications.