Amro M Abdelrahman, MD, Denny Yu, PhD, Bethany R Lowndes, EeeLN H Buckarma, MD, Becca L Gas, David R Farley, MD, M Susan Hallbeck, PhD, Juliane Bingener-Casey, MD. Mayo Clinic
Introduction: The goal of this study was to validate an inverted Peg Transfer (iPT) task for surgical training assessment using Messick’s validation model. Although the regular Peg Transfer (rPT) is used to assess basic laparoscopic surgery skills, it does not expose surgical trainees to all required intra-abdominal laparoscopic situations (e.g., working against the anterior abdominal wall during laparoscopic ventral hernia repair (LVHR)).
Methods and Procedures: A randomized crossover design was used to compare participants’ performance on rPT and iPT in a medical simulation center. The iPT consisted of a magnetic pegboard with standard rPT pegs and triangles attached to the ceiling of a Park Trainer Box on a laparoscopic video tower (Stryker Corp.). Like rPT, iPT is designed to assess hand-eye coordination, ambidexterity, and depth perception; in addition, it assesses the skills needed to place mobile objects against gravity, as in LVHR (content evidence). Participants were divided into two groups: novices (medical students and first-year surgical residents without laparoscopic experience) and experts (Minimally Invasive Surgery (MIS) attendings). Participants were asked to complete each version of the peg transfer separately (6-minute maximum). rPT was completed on a Fundamentals of Laparoscopic Surgery trainer. This was the first exposure to iPT for both novices and experts. Completion time (efficiency) and the numbers of dropped and transferred triangles (precision) were collected. A scoring rubric was used to calculate a normalized participant score between 0 and 100, with higher scores indicating better performance (internal structure evidence). Wilcoxon signed-rank (within-participant) and Mann-Whitney U (between-group) tests were performed as appropriate using SPSS v22 (IBM Corp.) with α = 0.05. Receiver operating characteristic (ROC) curves were plotted for the two task scores, and the area under the curve (AUC) was measured to quantify each task’s sensitivity and specificity in differentiating novices from experts.
Results: Thirty-six novices and nine experts participated. Both experts and novices had significantly longer completion times and lower scores on iPT than on rPT (Table 1). On iPT, novices averaged 158 seconds longer (p=0.047) and scored 36 points lower than experts (p<0.01). In contrast, there were no statistically significant differences between novices and experts on rPT in either completion time (novices 117 sec, experts 187 sec, p=0.24) or scores (novices 73, experts 81, p=0.12). The iPT scores had a higher AUC than the rPT scores (iPT = 0.92; rPT = 0.67).
Conclusion: The iPT is a valid method of assessing surgical trainees and has higher sensitivity and specificity than rPT for differentiating novices from experts. As advanced MIS becomes more common, it is important that iPT be included in surgical simulation-based training and assessment curricula.