GPU-Accelerated Real-Time Tissue Reconstruction for Semi-Automated In-Vivo Surgery

Jedrzej Kowalczuk, MS, Jay Carlson, BS, Eric T Psota, PhD, Lance Perez, PhD, Shane Farritor, PhD, Dmitry Oleynikov, MD. University of Nebraska Medical Center, University of Nebraska-Lincoln

 

Introduction: The objective of this study is to demonstrate the feasibility of a stereoscopic video system for three-dimensional reconstruction of the surgical environment. Accurate real-time three-dimensional reconstruction has the potential to enable vision-driven navigation of surgical robots and the automation of low-level surgical tasks. Here, a surgical vision system is proposed that uses a custom miniaturized stereoscopic video camera and a highly accurate GPU-accelerated stereo matching algorithm to create a computer model of the surgical environment in real time.

Methods and Procedures: To facilitate real-time reconstruction of the surgical environment, the stereoscopic video camera is positioned in front of live tissue within the normal viewing range of the surgical robot (5–10 cm). The stereo video frames are captured at a rate of 30 frames per second and processed by the GPU-accelerated stereo matching algorithm to produce a dynamic model of the environment. To demonstrate the accuracy of the reconstruction, several synthetic views of the surgical environment are reproduced by overlaying the color images on the three-dimensional model, and then compared to actual images taken from the same viewpoints.
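The abstract does not specify which stereo matching algorithm the system uses, so the following is only a minimal CPU sketch of the general principle behind such a pipeline: block matching on a rectified stereo pair to recover per-pixel disparity, followed by triangulation (depth Z = focal length × baseline / disparity). The function names, window-based SAD cost, and synthetic image pair below are illustrative assumptions, not the authors' GPU implementation, which aggregates matching costs in parallel to sustain 30 frames per second.

```python
import numpy as np

def disparity_map(left, right, max_disp=8, win=3):
    """Naive sum-of-absolute-differences (SAD) block matching.

    left, right: 2-D grayscale arrays from a rectified stereo pair.
    Returns integer per-pixel disparities (pixel shift of best match).
    """
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(pad, h - pad):
        for x in range(pad, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            best_cost, best_d = np.inf, 0
            # A point at x in the left image appears at x - d in the right.
            for d in range(min(max_disp, x - pad) + 1):
                cand = right[y - pad:y + pad + 1, x - d - pad:x - d + pad + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d; zero disparity maps to infinity."""
    disp = np.asarray(disp, dtype=np.float64)
    z = np.full_like(disp, np.inf)
    np.divide(focal_px * baseline_m, disp, out=z, where=disp > 0)
    return z

# Demo on a synthetic rectified pair with a uniform 4-pixel shift
# (hypothetical camera parameters, chosen only for illustration).
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.empty_like(left)
right[:, :-4] = left[:, 4:]          # true disparity of 4 pixels
right[:, -4:] = rng.random((20, 4))  # filler at the unmatched border
disp = disparity_map(left, right, max_disp=8, win=3)
depth = depth_from_disparity(disp, focal_px=500.0, baseline_m=0.05)
# Interior pixels recover the 4-pixel shift exactly.
```

Overlaying the left color image onto the resulting depth map, as described above, is what allows the scene to be re-rendered from novel synthetic viewpoints.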

Results: The real-time three-dimensional reconstruction was evaluated in a non-survival procedure on a porcine model that was performed at the University of Nebraska Medical Center and was approved by the institutional review committee. The procedure involved positioning the stereoscopic video camera directly in front of the operative field and recording 10 seconds of high-definition video with variations in movement and orientation of the camera, in order to approximate the viewing conditions that would occur during in-vivo surgery. To allow for subjective evaluation, the reconstructed model was rotated and rendered from different positions and compared to actual images obtained from those viewpoints. The results demonstrate that the system is capable of producing an accurate model of the environment in real time.

Conclusions: It was shown that the proposed stereoscopic imaging system is capable of capturing and processing high-definition stereo video within the surgical operating environment. In addition to providing the surgeon with depth perception, the video streams captured by the stereoscopic camera are used to reconstruct the surgical environment in three dimensions.

Results show that the system is capable of accurately reproducing the environment and providing realistic, synthetically rendered viewpoints of the operating theatre. The ability to produce an accurate three-dimensional reconstruction in real time is a significant advance toward the future automation of low-level surgical tasks.


Session Number: SS22 – Robotics
Program Number: S125
