Ghada Enani, MD1, Pepa Kaneva, MSc2, Yusuke Watanabe, MD3, Elif Bilgic, BSc2, Amani Munshi, MD4, Melina Vassiliou, MD2. 1University of Manitoba, 2Steinberg-Bernstein Centre for Minimally Invasive Surgery, McGill University, 3Department of Gastroenterological Surgery II, Hokkaido University Graduate School of Medicine, 4University Hospitals St. John Medical Center, Cleveland, Ohio
Introduction: Laparoscopic suturing (LS) is challenging, and its complexity and nuances, including the component of intraoperative decision making, are not modeled or measured in current simulation and assessment platforms. The script concordance test (SCT) is used to assess clinical reasoning but has never been applied to the assessment of the cognitive aspects of operative skills. The purpose of this study was to provide validity evidence for a novel SCT-based online assessment of LS skills.
Methods: We previously designed a video-based online SCT for laparoscopic suturing using a cognitive task analysis (CTA) and expert panelists. The CTA yielded 4 LS domains: needle handling (NH), tissue handling (TH), knot tying techniques (KT) and operative ergonomics (OE). The test was administered online using a survey platform with embedded videos of Nissen fundoplications and paraesophageal hernia repairs. Five-point scales with anchoring descriptors from -2 to +2 were used, and scoring was based on a modified SCT methodology. Experts were defined as surgeons and fellows performing >25 LS cases annually; inexperienced surgeons were defined as surgeons, fellows and residents performing fewer than 25 LS cases annually. Validity was assessed by comparing the SCT scores of the two groups, and Cronbach's alpha was used to assess the internal consistency of the test.
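The abstract does not specify the modification to the scoring scheme, but standard SCT scoring uses the aggregate method: an examinee's answer on each item earns credit proportional to the number of panelists who chose that answer, normalized so the modal panel answer earns full credit. A minimal sketch of that standard method, for illustration only (the `sct_item_score` helper and the sample panel data are hypothetical, not from the study):

```python
from collections import Counter

def sct_item_score(panel_responses, examinee_response):
    """Aggregate SCT scoring: credit equals the number of panelists who
    chose the examinee's answer, divided by the count of the modal
    (most frequently chosen) panel answer."""
    counts = Counter(panel_responses)
    modal_count = max(counts.values())
    return counts.get(examinee_response, 0) / modal_count

# Hypothetical example: 10 panelists rate one item on the -2..+2 scale
panel = [1, 1, 1, 1, 1, 2, 2, 0, 0, -1]   # modal answer is +1 (5 of 10)
print(sct_item_score(panel, 1))    # 1.0: modal answer, full credit
print(sct_item_score(panel, 2))    # 0.4: 2 of 5 relative to the mode
print(sct_item_score(panel, -2))   # 0.0: no panelist chose -2
```

Item scores are typically summed across the test and rescaled to a percentage, which is consistent with the mean percentage scores reported in the Results.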
Results: The initial survey comprised 47 questions across the four domains: 13 NH, 4 TH, 20 KT and 10 OE. Thirty-seven surgeons (18 experts and 19 inexperienced surgeons) from academic and community practices across North America participated. Questions with a large discrepancy among expert panelists, defined as a weighted score difference of more than 40, were discarded (n=20), and one question was discarded because it received a 100% score from all participants. This left 26 questions in the following domains: 8 NH, 2 TH, 11 KT and 5 OE. Test reliability (Cronbach's α) was 0.80. Mean scores were 72±9% for experts and 63±15% for inexperienced surgeons (p=0.02). The mean time to complete the test was 21 minutes.
Conclusion: This study provides validity evidence for a novel intraoperative LS assessment. The variability of responses among expert panelists suggests that the SCT may capture differences in clinical judgment and surgeon preferences in performing LS intraoperatively. Integrating this SCT-based educational tool into a training curriculum may better prepare trainees to perform LS in the OR, and it may also have a role in the assessment of decision-making skills for LS.
Presented at the SAGES 2017 Annual Meeting in Houston, TX.
Abstract ID: 87770
Program Number: P334
Presentation Session: iPoster Session (Non CME)
Presentation Type: Poster