Development of Kinect-based Training System for Laparoscopic Surgery

Daisuke Miki, BEng, Takuro Ishii, MEng, Kazuya Kawamura, PhD, Hiroshi Kawahira, MD, PhD, Tatsuo Igarashi, MD, PhD

Department of Medical System Engineering, Faculty of Engineering, Chiba University, Chiba, Japan

Objective of the technology
Minimally invasive surgery, including NOTES and SILS, requires surgeons to perform delicate operations under a narrow field of view and limited perception. To accelerate the learning of depth perception and directional awareness during the manipulation of surgical tools, a compact training system that can be set up on a desktop instantly, provides quantitative evaluation of hand-eye coordinated operation, and gives feedback based on the subject's performance is still desired and remains a challenging issue. In this study, we aimed to develop a surgical training system that provides multi-angle evaluation and intuitive visual feedback of surgical procedures by tracking and recording the position and orientation of the surgical instruments, while realizing a smaller package, lower cost, and easy setup.

Description of the technology and method of its use or application
A Kinect sensor (Microsoft Corp.) was employed to detect the 3D position and posture of the surgical instruments. It also captured the spatial structure of the surgical field, including the surgeon's body and the patient's abdominal wall. Because the Kinect carries two cameras, a depth camera and a color camera, and acquires both parameters for every point in its visual field simultaneously, we developed optical markers and an image-processing algorithm to track the forceps and the laparoscope. The proposed system can track three surgical instruments simultaneously. Finally, on the PC monitor, the environmental structure acquired with the Kinect and virtual forceps computed from the positions of the detected markers were combined into a single view. To evaluate the system, forceps whose tip was fixed at a known point on a table were moved randomly; we recorded the calculated 3D position of the forceps and compared it with the actual position. We also tested the system in an animal experiment to confirm the usability and efficacy of the visual feedback of intraperitoneal forceps position.
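The abstract does not give implementation details of the marker tracking. As a rough sketch, the core of such a color-plus-depth pipeline, thresholding the RGB image to find a marker's pixel centroid and back-projecting that pixel through the registered depth map with a pinhole camera model, might look like the following. All names, intrinsic values, and the single-channel threshold are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Assumed Kinect-like pinhole intrinsics (illustrative, not calibrated values).
FX, FY = 525.0, 525.0      # focal lengths in pixels
CX, CY = 319.5, 239.5      # principal point

def find_marker_centroid(color, channel=0, threshold=200):
    """Locate a bright single-color marker: threshold one color channel
    and return the (u, v) centroid of the mask, or None if not found."""
    mask = color[..., channel] >= threshold
    if not mask.any():
        return None
    vs, us = np.nonzero(mask)
    return float(us.mean()), float(vs.mean())

def back_project(u, v, z_mm, fx=FX, fy=FY, cx=CX, cy=CY):
    """Convert a pixel (u, v) with depth z (mm) to a camera-frame XYZ point (mm)."""
    x = (u - cx) * z_mm / fx
    y = (v - cy) * z_mm / fy
    return np.array([x, y, z_mm])

def track_marker(color, depth):
    """One frame of the sketched pipeline: marker pixel -> 3D position."""
    hit = find_marker_centroid(color)
    if hit is None:
        return None
    u, v = hit
    z = float(depth[int(round(v)), int(round(u))])
    if z <= 0:  # Kinect reports 0 where depth is invalid
        return None
    return back_project(u, v, z)

# Tiny synthetic frame: a 4x4 red patch at pixels (100..103, 200..203), 800 mm deep.
color = np.zeros((480, 640, 3), dtype=np.uint8)
color[200:204, 100:104, 0] = 255
depth = np.full((480, 640), 800, dtype=np.uint16)
print(track_marker(color, depth))
```

In a real system this per-marker 3D point would be computed for several markers on each instrument so that the shaft orientation, and from it the hidden tip position, can be estimated.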

Preliminary Results
The system depicted 3D models of the forceps and endoscope on the computer based on the tracked position data, allowing the motion of surgical instruments that is normally invisible to be recognized intuitively. The positional error of the proposed method was 1.9 ± 9.2 mm. In a usability evaluation under operating-room conditions, the system ran at 10 fps, and we confirmed that it could draw the surgical instruments and the surfaces of the patient's body and the surgeon simultaneously, suggesting the system's applicability in a clinical setting.
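The 1.9 ± 9.2 mm figure reads as the mean ± standard deviation of the per-sample error against the known fixed tip position; the abstract does not state whether this is a signed per-axis error or a Euclidean distance. A minimal sketch of the latter form of the evaluation, with made-up sample data, could be:

```python
import numpy as np

def positional_error_stats(estimated, true_point):
    """Mean and sample SD (mm) of the Euclidean distance between each
    estimated 3D tip position and the known fixed tip position."""
    est = np.asarray(estimated, dtype=float)
    d = np.linalg.norm(est - np.asarray(true_point, dtype=float), axis=1)
    return d.mean(), d.std(ddof=1)

# Made-up example: noisy estimates around a fixed tip at (0, 0, 500) mm.
rng = np.random.default_rng(0)
samples = rng.normal([0.0, 0.0, 500.0], 3.0, size=(200, 3))
mean_err, sd_err = positional_error_stats(samples, [0.0, 0.0, 500.0])
print(f"{mean_err:.1f} +/- {sd_err:.1f} mm")
```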

Conclusion/Future Direction
We developed a Kinect-based tracking system for surgical instruments that displays the motion of the forceps to the surgeon intuitively. Using this system, novice surgeons can learn the positions of the surgical instruments and depth perception more effectively. The Kinect also senses the presence of the patient automatically and detects motion by extracting a skeleton model. We will utilize this skeleton tracking to improve our training system, enabling more efficient evaluation of the comprehensive motion of novice surgeons by comparing it with that of experts.
Figure: 3D information of the surgical instruments and surface information of the environment.

Session: Poster Presentation

Program Number: ETP052

From the SAGES 2013 abstract archive.