An Anthro-Centric Multisensory Interface for Sensory Augmentation of Tele-Surgery

Anil K Raj, MD, Adrien Moucheboeuf, Andrew Holmgren, Timothy L Hutcheson, Joshua D Cameron, David V Lecoutre, Thomas A Vassiliades, MD MBA. Florida Institute for Human and Machine Cognition

Objective of Technique: While telerobotic surgical systems augment minimally invasive surgery techniques and reduce surgical trauma, they increase surgeon workload by limiting sensory feedback. Unlike open or laparoscopic procedures, surgical robots isolate the surgeon from tactile, proprioceptive, kinesthetic and orientation cues, which provide non-visual inputs that help maintain situation awareness (SA). The robot control system, however, employs torque, position, velocity and/or strain sensors to maintain accurate closed-loop servo motion, data which could help inform the surgeon’s SA. Though sitting at a console compares favorably to standing astride an operating table using laparoscopic instruments, removing all restrictions on surgeon motion and posture would allow telerobotic surgeons to move freely and reduce both physical and cognitive fatigue. Lastly, the surgeon must change modes between control of primary instruments and any additional arms (e.g., third effector, endoscope, etc.), but current robot interfaces provide little, if any, feedback to help the surgeon maintain awareness of such changes.

The Anthro-Centric Multisensory Interface for Sensory Augmentation of Tele-Surgery (ACMI-SATS) project seeks to restore to telerobotic surgery many of the sensory and kinesthetic cues available in open surgical procedures. This improves the effectiveness of tele-surgery by allowing a surgeon to utilize tactile, spatial audio and three-dimensional visual cues, as well as meaningful proprioceptive and kinesthetic information.

[Figure: ACMI-SATS simulation]

Description of the methods: The Florida Institute for Human and Machine Cognition (IHMC) developed the ACMI-SATS system with both simulated and actual surgical robot systems. It provides an integrated architecture with a wide-field-of-view, pseudo-three-dimensional visual operative field, surround sound for spatially relevant audio when selecting instruments, and multiple tactile interfaces that represent instrument dynamics. ACMI-SATS provides a natural free-motion control interface that uses a wearable, wireless motion-capture system to track head, torso, arm, wrist and finger movements. Natural motions drive the movement of the robotic instruments and the endoscope. The surgeon can stand or sit without any external restrictions, and proprioceptive and kinesthetic sensations map to the visual presentation at a 1:1 scale. Motion capture enables gestural control to manage mode changes (such as clutching or switching control to a different arm).
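The control mapping described above can be sketched in a few lines: hand displacements from the motion-capture stream drive the active instrument tip at 1:1 scale, while a held gesture cycles control between arms. This is an illustrative sketch only; the class, gesture, sample rate and thresholds are assumptions for exposition, not the actual ACMI-SATS implementation.

```python
from dataclasses import dataclass

@dataclass
class HandSample:
    x: float          # wrist position in metres (surgeon frame)
    y: float
    z: float
    pinch: bool       # finger-tracking flag: thumb and index touching

class ArmController:
    """Maps surgeon hand motion to an instrument tip at 1:1 scale.

    Hypothetical sketch: a pinch gesture held for ~1 s acts as a
    clutch/mode change, switching which robot arm the hand controls.
    """
    def __init__(self, arms=("primary", "endoscope")):
        self.arms = list(arms)
        self.active = 0                 # index of the arm under control
        self._last = None               # previous hand sample
        self._pinch_frames = 0          # consecutive pinched frames
        self.tips = {a: [0.0, 0.0, 0.0] for a in self.arms}

    def update(self, s: HandSample, hold_frames: int = 30):
        if s.pinch:
            # Gesture-based mode change: pinch held for hold_frames
            # samples (about 1 s at an assumed 30 Hz) cycles control
            # to the next arm; motion is clutched out while pinched.
            self._pinch_frames += 1
            if self._pinch_frames == hold_frames:
                self.active = (self.active + 1) % len(self.arms)
        else:
            self._pinch_frames = 0
            if self._last is not None:
                # 1:1 scale: instrument displacement equals hand
                # displacement, so proprioception matches the view.
                tip = self.tips[self.arms[self.active]]
                tip[0] += s.x - self._last.x
                tip[1] += s.y - self._last.y
                tip[2] += s.z - self._last.z
        self._last = s
        return self.arms[self.active]
```

For example, moving the hand 5 cm along x moves the active instrument tip exactly 5 cm, and holding a pinch for 30 samples hands control to the next arm without any console-side mode switch.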

Preliminary results: Initial evaluations on the ACMI-SATS test bed indicate that novices (non-surgeons) can learn to control and manipulate the robotic end effectors to perform minimally invasive surgical training tasks in simulation quickly and with low cognitive effort.

Conclusions/Expectations: The sensory interfaces augment understanding by providing additional information to the surgeon intuitively without overloading the visual or auditory sensory channels. The motion-capture and gestural control interface allows the surgeon to use more natural, unrestricted movements to control robot actions and mode changes. By mapping the visual scale directly to motion of the arms, the surgeon can maintain awareness of instrument positions even when they are no longer within the endoscope field of view. ACMI-SATS can be integrated with any surgical robot platform and could allow surgeons to learn and perform more procedures in a given time frame, with less effort and with fewer surgical errors.

Session: Poster
Program Number: P495
« Return to SAGES 2011 abstract archive