Evaluating Minimally Invasive Surgery (MIS) Assessment Metrics

Sami AbuSneineh, PhD, Brent Seales, PhD

University of Kentucky

The emerging maturity of camera-based systems has the potential to transform solutions to problems in many different areas, including MIS. In this work we built a novel computer-vision-based platform to study the problem of automatic skills assessment of MIS trainees. The system is built using a network of sensors to extract a large number (more than 50) of assessment metrics and their relationships to the surrounding environment. This study goes beyond the standard metrics examined in previous research and investigates the relationships among a large number of new metrics within the environment. We show that several new metrics have a high correlation coefficient, indicating that they can be composed with previously studied metrics to improve assessment accuracy.

The system integrates measurements of the surgical tools, the surgeon’s body movements (arms, head, eyes), and heart rate factors. These measurements are captured, coordinated, and analyzed using advanced vision technology in concert with off-the-shelf technology, such as an eye tracker and ECG monitors. Individual metrics, as well as combined or “fusion” metrics, support a new approach to the assessment problem based on the complex relationships between different kinds of surgical motion, a granular approach to collecting metrics, and the development of a better combined metric set.

The monitoring system contains a number of camera sensors that synchronously capture the motion of the surgical instruments and the trainee’s head, arms, and eyes, as well as the ECG rate. Eight cameras are dedicated to tracking the surgical tools and the trainee’s arms and head; two cameras attached to the surgical display track the trainee’s eyes; and an ECG attached to the trainee’s body tracks the heart rate. A robust synchronization algorithm coordinates these measurements to within sixteen milliseconds. The coordinated sensor environment allows the extraction of fifty-six low-level metrics. We applied this to fifty-eight subjects of different skill levels performing the pegboard ring transfer task. The base-level coordinated measurements (non-fusion measures) are combined to yield higher-level measurements (fusion measures) such as kinematics data, fatigue level, eye features, blind motion, gaze direction, and total looking time at various objects.
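The abstract does not describe the synchronization algorithm itself, but the idea of pairing samples from independent sensor streams to within a sixteen-millisecond tolerance can be sketched as follows. All names and the alignment strategy here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: align samples from multiple sensor streams by
# timestamp, emitting one synchronized frame whenever every stream has a
# sample within a 16 ms window. Stream names and data are illustrative.

TOLERANCE_MS = 16

def synchronize(streams):
    """Align sorted streams of (timestamp_ms, value) pairs into frames.

    `streams` maps a sensor name to a timestamp-sorted list of samples.
    Returns a list of dicts, one per synchronized frame.
    """
    indices = {name: 0 for name in streams}
    frames = []
    while all(indices[name] < len(samples) for name, samples in streams.items()):
        current = {name: streams[name][indices[name]] for name in streams}
        times = [t for t, _ in current.values()]
        if max(times) - min(times) <= TOLERANCE_MS:
            # All sensors report within tolerance: emit a frame, advance all.
            frames.append({name: v for name, (t, v) in current.items()})
            for name in indices:
                indices[name] += 1
        else:
            # Otherwise advance only the stream that is furthest behind.
            lagging = min(current, key=lambda name: current[name][0])
            indices[lagging] += 1
    return frames
```

A simple greedy scan like this assumes each stream is already timestamped on a common clock; in practice a hardware trigger or clock-offset estimation step would be needed first.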

The correlation coefficient between the skill levels and each metric is calculated. We found that eighteen metrics are statistically significant, with correlation coefficients larger than 0.5. Many of these metrics are novel. Figure 1 shows the list of metrics and their absolute correlation with skill level. Using the set of metrics with correlation higher than 0.5 to cluster the fifty-eight subjects yields a clustering error rate of just 3.4%.
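The metric-selection step described above can be sketched as computing the Pearson correlation of each candidate metric against the skill levels and keeping those whose absolute correlation exceeds 0.5. This is an illustrative sketch of the selection criterion, not the authors' code; the metric names and data are made up.

```python
# Illustrative sketch: rank candidate metrics by absolute Pearson
# correlation with skill level and keep those above a threshold.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_metrics(metric_values, skill_levels, threshold=0.5):
    """Return the names of metrics whose |r| with skill level exceeds threshold."""
    return sorted(
        name
        for name, values in metric_values.items()
        if abs(pearson(values, skill_levels)) > threshold
    )
```

In the study this selection retained eighteen of the fifty-six low-level metrics; a significance test (e.g. the p-value accompanying each correlation) would normally be applied alongside the threshold.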

Figure 1. The absolute correlation between the measured metrics and the skill level.

In this study we have built a platform that integrates multiple sensors to observe and extract data from the training environment. This platform makes it possible to study metrics that have not been used previously. We found a set of new metrics that have a statistically significant correlation with the trainees’ skill level. We also found that metrics related to speed and acceleration, which have traditionally been relied upon, have a low correlation coefficient.

Session: Poster Presentation

Program Number: P381
