That's because the system's design was inspired by Descartes' (fallacious) "homunculus" model of the mind -- the idea that our human bodies are operated by tiny versions of ourselves sitting in cranial control centers. Likewise, the MIT CSAIL system puts users in a VR headset and places them in a virtual control room. There, they manipulate digital knobs corresponding to each of the robot's arms and watch their progress on "screens" that broadcast from cameras mounted around the 'bot. In essence, it simulates placing the user inside the robot, which should be easier for humans to grasp spatially than having their hand motions map directly onto robot motions.

The team used a two-armed robot that users controlled to complete simple coordination tasks like connecting blocks. It apparently performed well under multiple network setups, from a wired person-to-bot connection to wireless control from the next room over. The team even successfully controlled a robot at MIT from a hotel room in Virginia.

The researchers designed the setup with manufacturing applications in mind. If a bot on the assembly line were having trouble, a supervising human could don a headset and virtually dip into the robot for a hands-on fix. Or, more broadly, it could enable the ultimate work-from-home experience, as the team's paper summarized: "Teleoperated robotic systems will allow humans the ability to work at scales and in environments which they cannot accomplish today. Barriers to working such as physical health, location, or security clearance could be reduced by decoupling physicality from manufacturing tasks."