Tiffany C. Cox, MD; Kristen Trinca, MD; Jonathan P. Pearl, MD; E. Matthew Ritter, MD
Norman M. Rich Department of Surgery, Uniformed Services University / Walter Reed National Military Medical Center, Bethesda, Maryland; Department of Surgery, University of Maryland, Baltimore, Maryland
Introduction: Virtual reality (VR) simulators have dominated the assessment of endoscopic skills. While VR simulators have significant benefits, they are frequently limited by high startup and maintenance costs, suboptimal durability with heavy use, and difficulty reproducing the "real feel" of GI endoscopy. These limitations led us to develop a physical model for endoscopic skills assessment, similar to models used in other areas of surgical skills assessment and training. The Simulated Colonoscopy Objective Performance Evaluation (S.C.O.P.E.) was developed to fill the need for a lower-cost, non-VR-based, valid assessment tool. The purpose of this study was to evaluate the ability of this new tool to objectively assess endoscopic skills.
Methods: Four tasks were created to evaluate the core skills of diagnostic endoscopy, using the Kyoto Kagaku colonoscopy model (Kyoto Kagaku Co., Ltd., Japan) as a base platform. The four tasks are as follows: Scope Manipulation requires use of torque and tip deflection to align a shape in the colon with a matching shape on the monitor screen. Tool Targeting requires coordination with biopsy forceps to contact a metal target. Loop Management requires prevention, recognition, and reduction of a redundant sigmoid colon with navigation to the cecum. Mucosal Inspection requires identification of simulated polyps placed randomly throughout a length of simulated colon and rectum, including on retroflexion. Key performance metrics were identified, and a scoring system was developed based on these parameters. Scores for each task were normalized to allow equal weighting of all four tasks. Thirty-five subjects were recruited for this prospective study and stratified into three cohorts based on colonoscopy experience: novice (0-50 colonoscopies) (n=11), intermediate (51-139) (n=13), and expert (>140) (n=11). Subjects performed two trials of all four tasks. Mean normalized scores were compared between groups for both the individual tasks and the total S.C.O.P.E. score by one-way ANOVA. Test-retest reliability was assessed with the intraclass correlation coefficient.
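The statistical analysis described above (cohort comparison by one-way ANOVA and test-retest reliability by intraclass correlation) can be sketched in Python. The cohort scores and two-trial data below are illustrative assumptions, not the study's data, and a one-way random-effects ICC(1,1) is shown as one common form of the coefficient; the abstract does not specify which ICC variant was used.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical normalized task scores for three experience cohorts
# (illustrative values only, not the study's actual data).
novice = np.array([50.0, 55.0, 60.0, 48.0, 52.0])
intermediate = np.array([85.0, 90.0, 88.0, 92.0, 87.0])
expert = np.array([100.0, 105.0, 102.0, 108.0, 104.0])

# One-way ANOVA comparing mean normalized scores across cohorts.
f_stat, p_value = f_oneway(novice, intermediate, expert)

def icc_1_1(trials: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for test-retest reliability.

    `trials` is an (n_subjects, k_trials) array of scores.
    """
    n, k = trials.shape
    grand_mean = trials.mean()
    subject_means = trials.mean(axis=1)
    # Between-subject and within-subject mean squares.
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((trials - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Two trials per subject (hypothetical expert retest scores).
retest = np.column_stack([expert, expert + np.array([2.0, -3.0, 1.0, -2.0, 2.0])])
icc = icc_1_1(retest)
```

With well-separated cohorts as above, the ANOVA p-value is far below 0.05, and the ICC reflects how much of the total score variance is between subjects rather than between repeat trials of the same subject.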
Results: Across all four tasks, experts (E) consistently outperformed intermediates (I), who in turn outperformed novices (N). These differences were statistically significant for all tasks. Mean normalized scores with 95% confidence intervals for each group on each task are as follows: Scope Manipulation [N-54 (26-82), I-90 (77-104), E-106 (93-118), p=0.0007], Tool Targeting [N-40 (24-55), I-79 (65-93), E-88 (72-105), p<0.0001], Loop Management [N-51 (24-79), I-78 (57-99), E-101 (98-105), p=0.003], Mucosal Inspection [N-73 (53-92), I-87 (77-96), E-100 (91-108), p=0.013], and Total S.C.O.P.E. Score [N-218 (155-280), I-334 (296-372), E-395 (371-419), p<0.0001]. Initial test-retest reliability for the expert Total S.C.O.P.E. score was respectable at 0.6.
Conclusions: A non-virtual-reality, simulation-based assessment tool has been created to evaluate the skills required to perform diagnostic endoscopy. Validity evidence shows that scores on these tasks can differentiate between groups expected to have different levels of technical skill. This model shows promise as a low-technology tool for objective assessment and training of endoscopic skills. While larger-scale validity evidence is needed, the S.C.O.P.E. model is a candidate for incorporation into programs requiring objective assessment of endoscopic skills, such as the Fundamentals of Endoscopic Surgery.
Session: Podium Presentation
Program Number: S111