The Centre for Innovation in Information Visualization and Data Driven Design (CIVDDD) is an Ontario Research Fund-Research Excellence grant, which includes a project called Data Acquisition, Analysis, and Visualization for 3D Computer Vision Surgical Robotics. This project, led by Richard Wildes, is concerned with advancing computer vision and data visualization for image-guided robotic surgery. One specific problem is the performance characterization of camera-based computer vision algorithms used to estimate the 3D geometry and motion of subjects during surgery, including high-resolution reconstruction and tracking of the surgical area. Progress on this problem is a major hurdle in the development of surgical robots, which could become even more autonomous in the near future (let’s hope that Skynet does not become self-aware during the surgery). So far, this technology is of greatest use to human surgeons, but it also serves their robotic assistants in simpler tasks by making 3D information about the current state of the surgery available in real time.



Richard Wildes, an associate professor in the Department of Electrical Engineering and Computer Science and the Centre for Vision Research at York University, has started a collaboration with the Hospital for Sick Children in Toronto, as well as the Memorial Sloan-Kettering Cancer Center in New York. The project develops machine learning and computer vision systems to help surgeons and robot assistants during medical operations and procedures. It feeds video camera input to a computer system, which enables an enhanced 3D visualization of the operation as it unfolds. Pre-surgical plans usually depend on so-called pre-op imagery, such as MRI scans of internal organs and similar modalities. However, complications can arise during the surgery itself, and the live situation sometimes diverges from the pre-op one. The system automatically registers the pre-op imagery against the live 3D information, maintaining an in-progress correspondence with the current state of the surgery to give the surgeon better guidance. This 3D data could also be used at a higher level, to give a robot surgical system more autonomy. The resulting 3D-calibrated medical imagery is the first database of its kind, and could help future surgeons improve their responsiveness, speed, accuracy and precision during surgery.
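At its core, registering a pre-op model against live 3D measurements means finding the rigid transform (rotation plus translation) that best aligns two point sets. The article does not describe the project's actual method, but a minimal sketch of the classic least-squares step (the Kabsch algorithm, assuming the point correspondences are already known) looks like this:

```python
import numpy as np

def rigid_align(source, target):
    """Estimate rotation R and translation t mapping source points onto
    target points in the least-squares sense (Kabsch algorithm)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det(R) = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check with synthetic data: recover a known rotation and translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
preop = np.random.default_rng(0).normal(size=(50, 3))   # hypothetical pre-op surface points
live = preop @ R_true.T + t_true                        # same points seen in the live frame
R, t = rigid_align(preop, live)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```

Real surgical registration is much harder (no given correspondences, deformable tissue, noise), which is exactly why it is an active research problem; iterative schemes such as ICP repeat a step like this while re-estimating correspondences.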

The use of robot assistants in surgery has been increasing rapidly in recent years, especially in oncology and complicated procedures, with success rates comparable to human surgeons’. The introduction of minimally invasive techniques (laparoscopic surgery) using robotic automation has been described as “the most dramatic change in surgery since the introduction of anesthesia”. The 3D, high-definition imaging of robotic technology enhances the view of the operating field and enables depth perception, since the surgeon controls the camera and the magnification (up to 10 times). There is also a so-called dual console mode (nothing to do with Playstation, sorry!), where a trainer and a trainee collaborate in various modes such as swap and nudge: the former lets them switch control of the robotic hands, while the latter lets the trainer use two robotic hands to guide the trainee through the procedure. This area of research recently gained a telementoring innovation for remote assistance, which could be further enhanced with augmented-reality systems.
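The depth perception mentioned above comes from stereo geometry: with two cameras, the depth of a point is recovered from how far its image shifts between the views. For a rectified stereo pair the relation is simply Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A tiny sketch, with purely illustrative numbers rather than a real endoscope's calibration:

```python
def depth_mm(focal_px, baseline_mm, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_mm  -- distance between the two cameras, in millimetres
    disparity_px -- horizontal shift of the point between views, in pixels
    (All values here are illustrative, not from a real surgical rig.)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

print(depth_mm(1000.0, 5.0, 50.0))  # → 100.0 (mm)
```

Note the inverse relation: nearby tissue produces large disparities and is measured precisely, while far-away structures with tiny disparities have much larger depth uncertainty.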

However, before medical experts and computer vision programmers can use machine learning and face tracking techniques for automated facial reconstruction or aesthetic and reconstructive surgery, a rigorous evaluation of robotic surgery is needed, through multi-centre randomized clinical trials and more data. For now, it seems that nose jobs are better left to humans: training on animals and cadavers has shown promise, but the failure rate is not yet negligible.

_______________

References: