Robotic surgery in its current form has established itself as a viable treatment option for several indications and the gold standard for a few. This has occurred because of the improved technology built into the robotic system. This paper reviews the brief history of robotics in surgery and then examines in more detail some possible future additions to the technological armamentarium that might significantly change the way surgeons perform robotic surgery.

Present-day robotics has advanced significantly from early computer‐aided design/manufacturing systems to modern master‐slave robotic systems that replicate the surgeon's exact movements onto robotic instruments in the patient. The use of the current robotic system continues to be refined: increasing experience has optimized port placement, reducing arm collisions and allowing more expedient surgery, while improved three‐dimensional camera magnification provides better intraoperative identification of structures.

The future of robotic surgery will take the current platform forward by improving haptic (touch) feedback, improving vision beyond even the magnified eye, improving robot accessibility through a reduction of entry ports, and miniaturizing the slave robot. Here, we focus on the possible advancements that may change the future landscape of robotic surgery.

Abbreviations

CAD/CAM, computer‐aided design/manufacturing; FDA, USA Food and Drug Administration; 2D/3D, two‐/three‐dimensional; AESOP, Automated Endoscopic System for Optimal Positioning; RP, radical prostatectomy; RA, robot‐assisted; MPM, multiphoton microscopy; LESS, laparoendoscopic single‐site surgery; NOTES, natural orifice translumenal endoscopic surgery; MAGS, magnetic anchoring and guidance system.

ROBOTICS OF THE PRESENT DAY

The Czech playwright Karel Čapek is credited with introducing the word ‘robot’ in his play Rossum's Universal Robots in 1921 1, 2. The word stems from the Slavic word robota, meaning serf labour. Later popularized by the science fiction writer Isaac Asimov in the 1940s 1, robotics finally became reality in 1961 with the first industrial robot, UNIMATE, on a General Motors factory assembly line in Trenton, NJ, USA 3. Whereas industrial robots are typically used to operate in areas that are dangerous or not easily accessible by humans, medical surgical robots were first introduced in the 1980s to augment the medical staff by imparting superhuman capabilities: high motion accuracy and the ability to perform interventions that would otherwise be physically impossible 4. Early surgical robots were computer‐aided design/manufacturing (CAD/CAM) systems 5. These used pre‐fixed anatomical landmarks as points of recognition and registration by the computer to allow movement within set confines. The rigid and predictable behaviour of bone was exploited first 1. RoboDoc (Integrated Surgical Systems, Sacramento, CA, USA), first used in humans in 1992, incorporated prior two‐dimensional (2D) fluoroscopic imaging to improve the placement and dimensional accuracy of prosthetic implants by robotic drilling and bone preparation 1, 4-6. NeuroMate (Integrated Surgical Systems) was approved by the USA Food and Drug Administration (FDA) in 1997 to assist in stereotactic functional brain surgery based upon preoperative head imaging 1, 2. Robotics was first introduced in urological surgery in the late 1990s for both prostate and renal access. ProBot (a prototype from Imperial College, London, UK) was a robotic resection device with seven degrees of freedom designed for automated TURP for BPH 2, 7, 8. Meanwhile, PAKY‐RCM (Percutaneous Access to Kidney–Remote Centre of Motion) and AcuBot were both developed at Johns Hopkins University.
These robots transformed 2D biplanar fluoroscopy images into their own 3D robotic space for precise percutaneous renal access 2, 7, 9, 10. Variations of the CAD/CAM robots have been used in many subspecialties of medicine, combining various imaging methods with the precision of robotics. A 3D ultrasound‐guided robotic needle placement can now even account for cardiac and respiratory motion, reducing invasiveness and user bias 11. The modern age of surgical robots began with robotic systems using continuous input from surgeons to change their movements in real time 2. In 1993, the Automated Endoscopic System for Optimal Positioning (AESOP; Computer Motion Inc., Goleta, CA, USA) became the first FDA‐approved endoscopic manipulator 2. AESOP manoeuvres the endoscopic camera according to the surgeon's commands, transmitted by either foot pedals or voice alone. With advances in robotic engineering, integrated master‐slave systems were developed, allowing very complex minimally invasive surgery to be performed. The ZEUS robotic system (Computer Motion Inc.) combined an AESOP unit with two robotic manipulator arms. A surgeon seated at a console used polarizing glasses to view a flat screen to gain a 3D image and manipulated handles to control the slave robot. Additional abilities of voice control integration and telemonitoring were provided 2, 7. FDA approval was granted in 2002. However, Computer Motion Inc. merged with Intuitive Surgical Inc. (Sunnyvale, CA, USA) and the ZEUS system was discontinued. The da Vinci® Surgical System (Intuitive Surgical Inc.) emerged as the state‐of‐the‐art telesurgical system. This master‐slave robotic system replicates the surgeon's exact movements on the master controls onto robotic instruments in the patient using EndoWrist® technology 2, 7. A binocular lens and camera system transmits magnified 3D images to the surgeon console.
In 2000, it was cleared by the FDA for use in general laparoscopic surgery 6, 12, followed by clearances in 2001 for radical prostatectomy (RP) and in 2005 for urological surgical procedures 12. The most recent edition, the da Vinci Si, was launched in April 2009, introducing improved high‐definition imaging and further streamlining of the entire system. The system also allows the addition of a second surgeon console for surgical training or combined two‐surgeon procedures 12. As of the end of 2010, 1752 da Vinci systems were installed in >1500 hospitals in 44 countries around the world: 1285 in the USA, 316 in Europe, and 151 in the rest of the world 12, 13. Globally, >300 000 robotic procedures were completed in 2010, including ≈98 000 robot‐assisted RPs (RARPs) 12, 13. Pioneering work assessing the feasibility of robotic surgery with RARP has progressed in the last decade to RA radical cystectomy and RA renal procedures, including partial nephrectomy, pyeloplasty, and nephroureterectomy with excision of bladder cuff. Broadening applications of robotics for urological procedures are being investigated in both adult and paediatric urology, with efforts published for RA pyelolithotomy and management of urolithiasis, robotic management of vesico‐vaginal fistula, and RA ureteroneocystostomy and ureteric tapering 14. The use of the current robotic system continues to be further refined. Increasing experience has optimized port placement, reducing arm collisions to allow for more expedient surgery. Improved 3D camera magnification up to ×15 provides improved intraoperative identification of structures 14. Robotics has probably improved the learning curve of laparoscopic surgery while still maintaining its patient recovery advantages and outcomes.
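The CAD/CAM access robots described above mapped 2D biplanar fluoroscopy into a 3D robotic workspace. The geometric core of that step, recovering a 3D landmark from two calibrated 2D views, can be sketched by linear triangulation; the projection matrices below are hypothetical, not those of any clinical system:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two calibrated views.

    P1, P2: 3x4 projection matrices, one per fluoroscopic view.
    x1, x2: 2D image coordinates of the same landmark in each view.
    Uses the direct linear transform: each view contributes two linear
    equations in the homogeneous 3D point, solved via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null-space vector = homogeneous solution
    return X[:3] / X[3]

# Two hypothetical views: a frontal camera and a lateral camera.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])  # 90 deg about y
P2 = np.hstack([R, np.array([[0.], [0.], [12.]])])

X_true = np.array([1.0, 2.0, 10.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)
```

With exact measurements the landmark is recovered exactly; in practice the same least-squares machinery averages out pixel noise across the two views.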
The future of robotic surgery will take this current platform forward by improving haptic (touch) feedback, improving vision beyond even the magnified eye, improving robot accessibility with a reduction of entry ports and miniaturizing the slave robot. Here, we focus on the possible advancements that may change the future landscape of robotic surgery.

ADVANCES IN HAPTIC FEEDBACK

Robotic arms allow the surgeon to precisely manoeuvre surgical instruments with high‐degree‐of‐freedom movements. One shortcoming frequently discussed is the lack of haptic feedback. Haptics describes touch feedback, which includes both kinaesthetic (forces and positions of muscles/joints) and cutaneous (tactile) feedback encompassing distributed pressure, temperature, vibration, and texture 15, 16. Surgical techniques rely on precise tissue handling, and sensory feedback of haptic cues is considered an essential part of open surgery. Robotic surgeons have thus far compensated by using improved visual cues to estimate forces, but fine manipulation may still be compromised with diminished haptic feedback. Efforts to return haptic feedback to the surgeon require both artificial haptic sensors on the patient side to acquire information and an interface to convey this information to the surgeon 15. A fundamental limitation is the trade‐off between system stability and transparency for force feedback, where small errors and delays in the system can cause uncontrollable oscillations and instability in the surgical display 15. Furthermore, a dexterous robot has seven degrees of freedom of movement, including translational, rotational and gripping movements. Not all degrees of freedom can be actuated on the master console, meaning that the system cannot provide force feedback in certain directions. This effect may be negligible or detrimental depending on the directions of force feedback lost 15. Kinaesthetic or force feedback systems are commercially available but in surgical practice are severely limited by constraints on size, geometry, cost, biocompatibility, and sterilizability 7, 15. Researchers have created specialized grippers that can attach to the jaw of existing instruments. The force applied by the surgeon is reduced with this force feedback, thereby reducing potential tissue damage 17.
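The stability-transparency trade-off described above can be illustrated numerically. The following sketch (a simplified one-degree-of-freedom model with hypothetical parameters, not any commercial controller) simulates a master handle coupled to a stiff tissue spring through reflected force; adding a modest feedback delay turns a well-damped response into a growing oscillation:

```python
import numpy as np

def simulate_master(delay_s, k=10.0, b=0.1, m=1.0, dt=0.001, T=5.0):
    """1-DOF master device feeling a reflected spring force k*x.

    The reflected force arrives `delay_s` seconds late, mimicking
    sensing/communication latency in the force-feedback loop.
    Returns the position trace of the master handle.
    """
    n = int(T / dt)
    d = int(delay_s / dt)
    x = np.zeros(n)
    v = 0.0
    x[0] = 0.01                       # small initial perturbation
    for i in range(1, n):
        x_del = x[i - 1 - d] if i - 1 - d >= 0 else 0.0
        a = (-k * x_del - b * v) / m  # delayed spring + viscous damping
        v += a * dt                   # semi-implicit (symplectic) Euler
        x[i] = x[i - 1] + v * dt
    return x

no_delay = simulate_master(0.0)   # well damped: oscillation decays
delayed = simulate_master(0.2)    # 200 ms delay: oscillation grows
```

The delayed force does net positive work on the handle each cycle, which is exactly the kind of instability that force-feedback controllers must trade transparency away to suppress.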
The effectiveness of haptic feedback on surgeon performance in phantom patients has been tested by several researchers. These preclinical tests have shown force feedback to reduce forces without a significant increase in trial time 15, 17. The problem again becomes the cost‐benefit ratio of these tools, whether by modifying current instruments or re‐engineering the instruments altogether. One recent solution is VerroTouch™, a haptic sensation system under development by Kuchenbecker et al. 18 (Fig. 1) 19. VerroTouch is a mechanically customized add‐on system that attaches onto the da Vinci S robotic system arms beneath the sterile drapes. By moving the haptic sensors from the instruments to the robotic arms, the sensors do not make patient contact and therefore do not require sterilization or reprocessing. VerroTouch analyses high‐frequency accelerations in the robotic arm movements and processes these accelerations in real time. Vibrotactile feedback is then provided as a combination of naturalistic high‐frequency vibrations at the surgeon's hand controls and/or stereo sound 18. Figure 1: VerroTouch system components integrated with the Intuitive Surgical da Vinci S Surgical System. Vibration sensors on the robotic arms are analysed and then reproduced on the vibration actuators on the surgeon console. (Permission requested from McMahan W. Tool Contact Acceleration Feedback for Telerobotic Surgery. IEEE Transactions on Haptics 19.) Surgeon trials have shown that surgeon responses to audio and direct haptic feedback on the master controls are generally positive. Negative responses reflect the importance of filtering real haptic sensation from background friction within the robotic system itself 19. Early in vitro and animal studies 20 showed that with VerroTouch haptic feedback even expert robotic surgeons minimize tissue forces, theoretically improving surgical outcomes.
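The core signal path of a VerroTouch-style system, isolating high-frequency contact vibrations from slow gross arm motion, can be sketched as a simple first-order high-pass filter. This is an illustrative stand-in with assumed cutoff and sampling values, not the published VerroTouch processing:

```python
import numpy as np

def extract_vibrations(accel, fs, cutoff_hz=50.0):
    """First-order high-pass filter over an accelerometer trace.

    Slow gross arm motion is rejected; brief high-frequency transients
    (e.g. instrument contact) pass through to drive a vibrotactile
    actuator or audio output.
    """
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / fs)
    out = np.zeros_like(accel, dtype=float)
    for i in range(1, len(accel)):
        out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
    return out

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
gross_motion = 1.0 * np.sin(2 * np.pi * 1.0 * t)    # slow arm movement
contact_buzz = 0.2 * np.sin(2 * np.pi * 200.0 * t)  # contact vibration
filtered = extract_vibrations(gross_motion + contact_buzz, fs)
```

The large 1 Hz motion is attenuated roughly fifty-fold while the small 200 Hz buzz passes nearly unchanged, which is the behaviour a surgeon needs: feel the contact event, not the robot's own travel.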
Whereas the future of force haptic feedback is promising, very little research has been reported for tactile haptic feedback including detection of local mechanical properties of tissues such as compliance, viscosity, and surface texture 15. A future combining robotic precision with human haptic feedback may allow the best qualities of minimally invasive and open surgical techniques to be exploited.

ADVANCES IN VISION

High‐definition robotic vision continues to improve, with finer resolution enabled by shrinking electronic components. Further advances are underway to enhance surgical vision beyond even the magnified eye of the surgical robot. This can be achieved via two approaches: (i) combining the surgical field with adjunct real‐time imaging or (ii) improving visual resolution beyond the surface anatomy to visualize anatomical structures (vessels, nerves) or small tumours difficult to see with the naked eye. Augmented vision is the concept of integrating computer‐generated images from preoperative studies overlaid onto the live video image during surgery 21. This image‐guided surgery is typically based on bony landmarks and is hence possible in neurosurgery, maxillofacial surgery, and orthopaedics. Efforts to extrapolate its use to abdominal surgery become challenging owing to the deformable viscera and constant shifting from breathing and movement of surgical instrumentation 7, 21. Progress in this field has been reported recently in minimally invasive renal surgery 21. New tracking systems are being developed to achieve dynamic real‐time overlay onto a surgical field by accounting for the dynamic movement of the target organ. A 3D positional correlation between the overlaid images and the surgical instruments becomes feasible without the limitations of using only real‐time data acquisition. This allows CT or MRI image overlay depending on the goals of surgery. Such surgical navigation has been demonstrated during both laparoscopic partial nephrectomy and laparoscopic nerve‐sparing RP 21. Ukimura and Gill 21 are further developing this navigation software to improve the precision and function of the augmented reality visualization system. A body‐'global positioning system' has been introduced as a new organ‐tracking system.
This may soon allow for predictive navigation systems in which 'surgical radar' will predict the ideal surgical plane before the actual surgical manoeuvre is performed. A colour‐coded zonal navigation model would then be overlaid on the surgical field to help achieve better oncological and functional outcomes (Fig. 2) 21. Figure 2: Example of augmented reality navigation during laparoscopic partial nephrectomy. A, Original CT images (left); four‐colour coded surgical model: red zone indicates the tumour, yellow zone is a 5‐mm safe margin, green zone is the surgeon's target plane of dissection, blue zone is normal renal parenchyma to be maximally preserved (right). B, Augmented reality image is superimposed onto the real‐time surgical view. (Permission requested from Ukimura O, Gill IS. Image‐Fusion, Augmented Reality and Predictive Surgical Navigation. Urol Clin North Am 2009; 115–123 21.) Other methods have been explored for real‐time image and anatomical localization data acquisition. Intraoperative nerve stimulation and tumescence monitoring with CaverMap (Blue Torch Corporation, Boston, MA, USA) 22 has been used to demarcate cavernous nerves during RARP. Power‐Doppler TRUS (B‐K Medical, Copenhagen, Denmark) has been used to identify and confirm pulsatile blood flow within the neurovascular bundles, whose preservation correlates with superior erectile function recovery 23. Likewise, intraoperative ultrasound is widely used to identify and demarcate resection edges during RA partial nephrectomy. Multiple‐input display is supported by the da Vinci surgeon console using the TilePro® system to allow 'picture‐in‐picture' viewing. Drawbacks include the necessity of additional surgical personnel to manipulate the imaging device and the potential requirement of an additional assistant port.
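At the heart of any augmented-reality overlay is the projection of preoperative 3D model points into the live endoscope image using the tracked organ pose. A minimal pinhole-camera sketch, with hypothetical intrinsics and pose rather than any published navigation software:

```python
import numpy as np

def overlay_points(model_pts, R, t, K):
    """Project preoperative 3D model points into the live camera image.

    model_pts: Nx3 points of the organ model (e.g. a tumour margin).
    R, t: rigid pose of the model in camera coordinates, as estimated
          intraoperatively by an organ-tracking system.
    K: 3x3 pinhole intrinsics of the endoscopic camera.
    Returns Nx2 pixel coordinates for drawing the overlay.
    """
    cam = model_pts @ R.T + t      # model frame -> camera frame
    px = cam @ K.T                 # perspective projection
    return px[:, :2] / px[:, 2:3]  # divide by depth

# Hypothetical endoscope intrinsics and a tracked pose.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])    # model sits 10 units ahead of camera
pts = np.array([[0.0, 0.0, 0.0],  # model origin
                [1.0, 0.0, 0.0]]) # a point 1 unit to the right
pix = overlay_points(pts, R, t, K)
```

Organ tracking reduces to updating R and t every frame; the difficulty with deformable viscera, as noted above, is that a single rigid pose stops being a valid description of the anatomy.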
During RARP, intraoperative TRUS navigation allows for: (i) identification of hypoechoic prostatic nodules, (ii) precision during lateral pedicle transection and neurovascular bundle release, (iii) calibrated wider dissection at the site of suspected extracapsular extension, (iv) tailored dissection of the individual prostate apex and (v) facilitation of the posterior bladder neck transection (Fig. 3) 24. Used in this setting, TRUS during laparoscopic RP allowed intraoperative prediction of pT2 vs pT3 disease with 85% accuracy and a statistically significant decrease in positive margins in pT3 disease by following real‐time recommendations of calibrated wider site‐specific dissection 25. Figure 3: A, TRUS shows hypoechoic cancer nodule occupying entire prostate apex. B, Real‐time TRUS monitoring of apical dissection in same patient with apical nodule (yellow dotted line). Laparoscopic scissors tip was monitored in real time by TRUS. C, Laparoscopic view shows apical dissection. D, Pathological study reveals striated muscle fibres, indicating normal apical periprostatic soft tissues, as prostate apex is generally devoid of typical prostate capsule. (Permission requested from Ukimura & Gill. Real‐time Transrectal Ultrasound Guidance During Nerve Sparing Laparoscopic Radical Prostatectomy: Pictorial Essay. J Urol 24.) Manual TRUS manipulation discards potentially important positional data. Han et al. 26 have developed a robotic TRUS probe manipulator (TRUS robot) and 3D‐reconstruction software to be used concurrently with the da Vinci surgical robot in a tandem‐RA laparoscopic RP. This TRUS robot allows the surgeon to manipulate the TRUS guidance without additional personnel in the limited space bounded by the patient's legs, the operating table and the da Vinci surgical robot (Fig. 4) 26.
The additional positional information also allows for precise volume measurement, 3D reconstruction, and navigation display 26. Figure 4: A and B, Schematic setup for tandem‐RA laparoscopic RP. C, Rotary scan of the prostate volume, ultrasound image showing colour Doppler activity and 3D model of the prostate gland and blood vessels in the neurovascular bundle (NVB). Images acquired with Hitachi HiVision 6500, lateral fire endorectal probe EUP‐U533 at 10.5 MHz. (Permission requested from Han M, Kim C, Mozer P et al. Tandem‐robot Assisted Laparoscopic Radical Prostatectomy to Improve the Neurovascular Bundle Visualization: A Feasibility Study. Urology 26.) Novel technologies are being developed that move us beyond traditional imaging into the realm of microscopic surgical vision. Multiphoton microscopy (MPM) 27 allows real‐time intraoperative histopathology without the need for excision or administration of contrast agents (Fig. 5). MPM enables imaging of fresh, living, unprocessed tissue by exploiting intrinsic tissue emissions: excitation by two low‐energy photons causes non‐linear excitation. Tissue autofluorescence from intracellular molecules and second harmonic generation from non‐centrosymmetric tissues generate distinctive optical signals, allowing imaging at sub‐micron resolution. Using each tissue's unique signature, MPM can identify all relevant prostatic and periprostatic structures, including nerves, blood vessels, capsule and underlying acini, as well as pathological changes such as prostate cancer. Ex vivo testing has shown MPM to be comparable to the 'gold standard' of haematoxylin and eosin‐stained histopathology of the same specimen 27. This allows detailed feedback beyond pre‐surgical imaging and predictors, without the tissue destruction and time delay of intraoperative frozen sections. An early custom‐made MPM system has proven safe and effective in imaging of the prostate and periprostatic tissue.
However, the system requires further miniaturization and integration with the robotic surgical platform for further study in a true surgical setting 27. Figure 5: MPM imaging of periprostatic fascia obtained from intraoperative surgical margins. A and B, Lateral prostatic fascia showing a large artery (a), connective tissue (b) and fat (c). A, MPM image. B, Histology slide with Weigert's stain to identify elastin. C, Surgical apical margin showing a small nerve (arrow). Small arrowhead points to collagen, large arrowhead to elastin, in the connective tissue. Colour‐coding of all MPM images: red, second harmonic generation; green, autofluorescence between 420 and 530 nm. Scale bars: A, C 500 µm. (Permission requested from Tewari AK, Shevchuk MM, Sterling J et al. Multiphoton microscopy for structure identification in human prostate and periprostatic tissue: implications in prostate cancer surgery. BJU International 27.) In February 2011, Intuitive Surgical Inc. received FDA clearance for a da Vinci fluorescence imaging system allowing surgeons to image vasculature in 3D beneath tissue surfaces in real time 12. Phase I studies are underway to evaluate its use with i.v. administration of indocyanine green to optimize near‐infrared imaging of cortical renal tumours 28. Ultimately, the future success of any advanced imaging system lies in its ability to simply enhance surgical vision without becoming overly cumbersome.
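The colour-coding described for the MPM images (red for second harmonic generation, green for autofluorescence) amounts to fusing two detection channels into one RGB display image. A minimal sketch, assuming each channel arrives as a 2D intensity array:

```python
import numpy as np

def mpm_composite(shg, autofluor):
    """Fuse two MPM detection channels into a single RGB display image.

    shg: second harmonic generation intensities -> red channel.
    autofluor: 420-530 nm autofluorescence intensities -> green channel.
    Each channel is normalised to [0, 1] independently; blue stays zero,
    matching the two-channel colour scheme described in the text.
    """
    def norm(ch):
        ch = ch.astype(float)
        rng = ch.max() - ch.min()
        return (ch - ch.min()) / rng if rng > 0 else np.zeros_like(ch)

    rgb = np.zeros(shg.shape + (3,))
    rgb[..., 0] = norm(shg)        # red: SHG (e.g. collagen)
    rgb[..., 1] = norm(autofluor)  # green: intrinsic autofluorescence
    return rgb

# Tiny hypothetical 2x2 acquisitions, one value per pixel.
shg = np.array([[0.0, 10.0], [5.0, 10.0]])
af = np.array([[2.0, 2.0], [0.0, 4.0]])
img = mpm_composite(shg, af)
```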

ACKNOWLEDGEMENTS

The authors would like to thank all the institutions and journals that allowed illustrations of their robotic advancements to be included in our manuscript. We would further like to acknowledge those institutions providing video segments to further detail their contributions to robotic technology: Dr Katherine Kuchenbecker, Karlin Bark and colleagues at the University of Pennsylvania for VerroTouch™, Dr Misop Han and colleagues at Johns Hopkins University for tandem‐RA laparoscopic RP, and Dr Ash Tewari, Dr Abishek Srivastava and colleagues at the Weill Cornell Medical College.

CONFLICT OF INTEREST None declared.