Background: Intraoperative appreciation of visible anatomy, along with awareness of underlying structures and vasculature, is invaluable to the operating surgeon. The advent of minimally invasive techniques, with reduced tactile feedback and limited visual displays, has only heightened the need for improved visualization of target anatomy and adjacent but visually imperceptible structures. Current laparoscopic images are rich in surface detail but provide no information on deeper features. We are developing a novel method of performing laparoscopic surgery using a 64-slice computed tomography (CT) scanner with continuous scanning capability. This study describes our work to date to produce an augmented reality (AR) image that instantaneously merges intraoperative CT images with the live images from the laparoscope.
Methods: Under an Institutional Animal Care and Use Committee (IACUC)-approved protocol, we conducted a series of CT-guided laparoscopic operations using a non-survival porcine model. A fully equipped laparoscopic surgical suite was assembled within the CT scan room. A multidisciplinary research team of minimally invasive surgeons, radiologists, and biomedical engineers contributed to study design and conducted the experiments. We employed a 64-slice CT scanner with continuous scanning capability to image the surgical field approximately once per second. An infrared detection system tracked the position of a specially equipped laparoscope in order to reconcile the laparoscopic view with the corresponding 3-D CT image. Laparoscopic operations performed included peritoneoscopy, cholecystectomy, hepatic wedge resection, and gastrorrhaphy, with intraoperative CT scanning. Deformable image registration (alignment) techniques and low-dose reconstruction methods allowed intraoperative CT scanning at 25 mAs, roughly one-tenth of the standard diagnostic dose. Using commercially available software, we generated an AR image that merges the reconstructed intraoperative CT with images from the laparoscope.
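The core geometric step described above, reconciling the tracked laparoscope pose with the CT volume so that CT-derived features can be overlaid on the video frame, can be sketched as a standard camera projection. The sketch below is illustrative only and is not the authors' implementation: it assumes a simple pinhole camera model, a rigid CT-to-camera transform supplied by the infrared tracker after registration, and hypothetical names (`project_ct_point`, `T_cam_from_ct`, `K`); the real system additionally applies deformable registration and lens-distortion correction.

```python
import numpy as np

def project_ct_point(p_ct, T_cam_from_ct, K):
    """Project a 3-D point from CT space into laparoscope image pixels.

    p_ct          -- (3,) point in CT coordinates (mm)
    T_cam_from_ct -- (4, 4) rigid transform from the CT frame to the
                     camera frame (assumed supplied by the tracker)
    K             -- (3, 3) laparoscope camera intrinsic matrix
    """
    p_h = np.append(p_ct, 1.0)        # homogeneous CT-space point
    p_cam = T_cam_from_ct @ p_h       # point in the camera frame
    uvw = K @ p_cam[:3]               # pinhole projection
    return uvw[:2] / uvw[2]           # pixel coordinates (u, v)

# Example with hypothetical intrinsics: identity pose, point 100 mm
# in front of the lens on the optical axis
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv = project_ct_point(np.array([0.0, 0.0, 100.0]), np.eye(4), K)
# a point on the optical axis projects to the principal point (320, 240)
```

Repeating this projection for each CT-segmented structure at the scanner's roughly once-per-second update rate is what allows the overlay to stay aligned with the live laparoscopic view.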
Results and Conclusions: Through a series of six operative experiments, we have amassed a data set that includes rendered video and laparoscopic images, demonstrating the feasibility of merging optical surface information with radiographically imaged deep anatomic features (Fig. 1). Our method represents an accurate, instantaneous, high refresh-rate approach to AR, which we have termed “live AR.” These initial experiments represent the first use of a new surgical visualization capability, with potential to significantly enhance operative performance.
Session: Podium Presentation
Program Number: S057