After tinkering with his brand-new Xbox Kinect one weekend, University of Washington graduate student Fredrik Rydén devised a way to use the technology for surgery.

The basic idea behind Rydén's hack was to tap into the Kinect's ability to grab raw depth data in real time, point it at a surgical site rather than at game players' whole bodies, and then marry that data to haptics, or touch feedback. The system is built from the Kinect, the OpenKinect drivers, a virtual environment application, and a haptic device, in this case Sensable's Phantom Omni.
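To make that pipeline concrete, here is a minimal sketch of its first stage, assuming the OpenKinect project's Python bindings (the freenect module) and NumPy; it grabs one raw depth frame and back-projects it into a rough point cloud. The calibration constants are widely published approximations, not values taken from Rydén's code.

    import numpy as np
    import freenect  # Python wrapper from the OpenKinect project

    def grab_point_cloud():
        # Grab one 640x480 frame of raw 11-bit depth readings from the Kinect.
        depth_raw, _timestamp = freenect.sync_get_depth()

        # Approximate conversion from raw disparity to meters (community
        # calibration); raw values of 2047 mean "no reading" and should be
        # masked out in a real system.
        depth_m = 1.0 / (depth_raw * -0.0030711016 + 3.3309495161)

        # Back-project each pixel into 3D using rough Kinect depth-camera
        # intrinsics (focal length ~594 px, principal point at 320, 240).
        fx = fy = 594.21
        cx, cy = 320.0, 240.0
        v, u = np.mgrid[0:480, 0:640]
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return np.dstack((x, y, depth_m)).reshape(-1, 3)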

Xbox Kinect in Research

Rydén, who is advised by professors Howard Chizeck and Blake Hannaford, developed the system as part of a larger research effort at the electrical engineering department's BioRobotics Lab, where Rydén is a visiting graduate student from Sweden.

The University of Washington researchers envision using the Kinect hack to add haptics to robot-assisted surgery (usually minimally invasive, or laparoscopic, surgery), which currently relies on visual information from the tiny video cameras sent into the body alongside the surgical instruments. Doctors can see where their tools are, but they cannot feel anything. When an instrument makes contact with something it shouldn't, like bone or a vital organ, the robotic controls don't automatically stop moving, nor do they provide any sensation indicating that a new surface is being touched.

Adding haptics means giving the surgeons physical resistance on their end that imitates what their instruments are touching, or shouldn't be touching. The medical team could essentially create force fields around vital organs, arteries, or anything that, if disturbed, may cause the patient harm. If the instruments get too close to these protected areas, the surgeon might feel the warning before noticing the potential problem on screen.
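As a rough illustration of what such a force field could look like in code, and not the BioRobotics Lab's actual implementation, the sketch below defines a spherical no-go zone around a protected point and returns a spring-like force that pushes the tool back out whenever it crosses the boundary; the function name, stiffness value, and zone geometry are all assumptions made for the example.

    import numpy as np

    def no_go_force(tool_pos, zone_center, zone_radius, stiffness=800.0):
        # Spring-like repulsive force (newtons) pushing the tool out of a
        # spherical protected region; zero while the tool stays outside it.
        offset = tool_pos - zone_center
        dist = np.linalg.norm(offset)
        if dist >= zone_radius or dist == 0.0:
            return np.zeros(3)            # outside the zone: no resistance
        penetration = zone_radius - dist
        direction = offset / dist         # unit vector pointing away from the organ
        return stiffness * penetration * direction

    # Hypothetical usage: a 2 cm protective sphere around a structure at the origin.
    force = no_go_force(np.array([0.01, 0.0, 0.0]),
                        np.array([0.0, 0.0, 0.0]),
                        zone_radius=0.02)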

Cheaper Solutions Using Consumer Electronics

The research group, which is working more broadly to improve surgical robotic methods, had discussed using CT scans to define, before surgery, the areas where the surgeon's instruments should not go, an expensive process. The team then discussed whether some kind of infrared sensor could define those regions automatically instead. They decided to try the Kinect, in part because it's a relatively low-cost, mass-produced product, and Rydén set out to write some code. "I bought the Kinect on a Friday and was done on Monday morning," Rydén said.

The Kinect camera works well enough when pointed at a table and an object sitting on it, but how would it work inside the body?

"We are investigating ways to achieve the Kinect functionality but with a smaller system that is physically inside the body, ultimately to provide haptic feedback (sense of touch) to the surgeon operating the controls for robot-assisted and possibly non-robotic endoscopic surgery). Details regarding this are still in the early stages," said Chizeck and Rydén via email.

"The code that Fredrik Rydén wrote," Chizeck said, "is in what is called 'The Virtual Environment.' This part transforms and visualizes the depth data as a point cloud (small dots in a 3D environment) and also does the haptic rendering to create 'no-go'-zones. When the surgeon tries to enter one of these zones, he or she feels resistance from the haptic device. That is, it pushes back." And because Kinect works in real-time, the surgeons would be able to interact in a fluidly moving environment, a crucial fact when working inside a living, breathing human being.

Operating From Afar

Further afield, Rydén and his group are imagining more complex instances of robot-assisted surgery, where doctors operate remotely. Howard Chizeck, one of the professors connected to the project, told the university's newspaper The Daily, "Suppose there's an earthquake somewhere. First responders could get victims to a van with a satellite dish on top and the tools inside, and a surgeon somewhere else could perform the surgery."

In a YouTube video, Rydén demonstrates his creation using the Phantom Omni by Sensable, a hardware peripheral that integrates with a variety of software in which users benefit from being able to feel virtual elements. For example, when sculpting virtual clay in a 3D modeling program, the Phantom Omni creates resistance that mimics the physical properties of clay, so the user must physically push the device's stylus to mold it. Rydén's video shows him sitting at a table, as seen through the Kinect camera, while another person in a different location uses the Phantom Omni. The Omni user can "feel" the surface of the table, and when Rydén places a small object on the table, the Omni detects that, too, and the user feels as if he is poking and prodding it.