Multiple Sensors System to Evaluate Minimally Invasive Surgery Trainees

Sami Abusnaineh, MS, Brent W Seales, PhD. University of Kentucky


 The typical techniques for assessing the performance of minimally invasive surgery (MIS) surgeons and trainees are direct observation, global assessments, and checklists. These techniques are largely subjective and leave a wide margin for bias; objective evaluation of MIS trainees' skills has therefore become an increasing concern. MIS technical skills assessment can be improved using computer vision and camera technology. Integrating instrument, arm, head, eye, and ECG data through advanced vision techniques and a robust statistical model can substantially improve MIS technical skills assessment. The goal of this study is to build a new multiple-sensor framework that extracts novel fusion and non-fusion metrics to evaluate an MIS trainee's performance.
Using multiple vision systems to study the assessment problem makes it possible to extract the relationships between different kinds of motion and to develop a better set of assessment metrics. For example, studying the motion and direction of the trainee's head yields better assessment factors because it reveals the interaction between the surgeon and the surgical field of view. Tracking the surgeon's eyes can likewise provide reliable assessment metrics such as fatigue and eye-hand coordination. The framework contains several camera sensors that synchronously capture the motion of the surgical instruments and the trainee's head, arms, and eyes, along with the ECG signal. The sensors are: eight cameras installed in a circle on the room ceiling to track the trainee's arms and head; two cameras attached to the surgery display to track the trainee's eye features; the laparoscopic camera, used to track the surgical instruments; and finally an ECG attached to the trainee's body to track heart rate. A robust synchronization algorithm synchronizes the capture of these subsystems to within sixteen milliseconds, which limits the frame offset between subsystems to one frame. Figure 1 shows the architecture of the system.
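The abstract does not describe the synchronization algorithm itself; as an illustrative sketch of the underlying idea only, the code below aligns independently clocked sensor streams by pairing frames whose timestamps fall within the stated sixteen-millisecond budget (roughly one frame at 60 Hz). All function names and data structures here are hypothetical assumptions, not taken from the study:

```python
# Hypothetical sketch: align frames from independently clocked sensor
# streams by timestamp, pairing each master-stream frame with the
# nearest frame of every other stream. The 16 ms tolerance matches the
# synchronization bound stated in the abstract.

from bisect import bisect_left

TOLERANCE_S = 0.016  # 16 ms synchronization budget

def nearest(timestamps, t):
    """Return the timestamp in the sorted list closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda c: abs(c - t))

def align(master, others):
    """Pair each master timestamp with the nearest timestamp in every
    other stream, keeping only sets that fall within the tolerance."""
    aligned = []
    for t in master:
        matches = [nearest(ts, t) for ts in others]
        if all(abs(m - t) <= TOLERANCE_S for m in matches):
            aligned.append((t, *matches))
    return aligned

# Example: a 60 Hz ceiling-camera stream vs. an eye-tracker stream
# whose clock is offset by 5 ms.
ceiling = [i / 60 for i in range(5)]
eye     = [i / 60 + 0.005 for i in range(5)]
print(align(ceiling, [eye]))  # five pairs, each within 16 ms
```

With a one-frame budget, each synchronized set can simply be treated as a single multi-sensor observation downstream.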
By coordinating those subsystems, we were able to extract a list of novel fusion and non-fusion metrics for the assessment. Some of these metrics are based on:
• Kinematics data such as path, speed, acceleration, rotation, and working volume for laparoscopic instruments and trainee’s head and arms.
• Fatigue level and the relationship between the fatigue and the motion features.
• Eye saccades, eyelid closure, changes in blink rate, and object interaction.
• Path, speed, acceleration, rotation, and working volume for the instruments and the trainee's arms while the instruments are not visible or the trainee is not looking at the display.
• Gaze direction and the frequency with which the trainee shifts gaze between the display and the incision site.
• Total looking time at different objects in the environment.

Fig. 1. The framework's architecture.
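To make the kinematics-based metrics concrete, the following is a minimal, hypothetical sketch of how path length, mean speed, and working volume might be computed from a tracked 3D trajectory (instrument tip, head, or arm marker). The 60 Hz frame rate and all names are illustrative assumptions, not from the abstract:

```python
# Hypothetical sketch: basic kinematic metrics from a sequence of
# tracked 3D positions sampled at a fixed frame rate.

import math

DT = 1 / 60  # assumed frame interval (60 Hz capture)

def path_length(points):
    """Total distance travelled along the trajectory."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def mean_speed(points):
    """Average speed over the whole trajectory."""
    return path_length(points) / (DT * (len(points) - 1))

def working_volume(points):
    """Volume of the axis-aligned bounding box enclosing the motion."""
    spans = (max(c) - min(c) for c in zip(*points))
    return math.prod(spans)

# Toy trajectory in metres: three orthogonal moves of 1, 2, and 3 cm.
traj = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0),
        (0.01, 0.02, 0.0), (0.01, 0.02, 0.03)]
print(round(path_length(traj), 4))   # → 0.06
print(round(mean_speed(traj), 2))    # → 1.2
```

Acceleration and rotation metrics would follow the same pattern, differencing successive velocity vectors or orientation estimates per frame.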
In this study we have presented a multiple-sensor framework for evaluating MIS trainees. The framework is novel because it is the first system to integrate the tracking of several objects at a time. This integration enabled the extraction of fusion metrics by studying the coordination among the tracked objects' motions. These novel metrics are promising and may correlate validly and reliably with experience level.

Session Number: Poster – Poster Presentations
Program Number: P611

SAGES 2012