Determining the ‘STTARS’ among teachers of advanced laparoscopic surgery

Susannah M Wyles, MD, Danilo Miskovic, MD, FRCS, Melody Ni, PhD, Nader Francis, PhD, FRCS, Mark G Coleman, MD, FRCS, George B Hanna, PhD, FRCS

Imperial College London, Derriford Hospital (on behalf of the National Training Program)

Within medicine there is an expectation that clinicians will teach and provide an apprenticeship. With a reduction both in the time available for training and in the ethical acceptability of "practising" on patients, coupled with increasing pressure on trainers to deliver service, the challenge of providing quality training is greater than ever, particularly for the more complex surgical procedures. There is currently no method of quantifying the quality of such a teaching episode. Drawing on those involved in the National Training Programme for laparoscopic colorectal surgery (NTP), the aim was to devise, validate and implement a training assessment tool to provide objective feedback.

Methods and procedures:
To obtain opinion regarding training structure in laparoscopic colorectal surgery (LCS), feedback and the characteristics that make a good trainer, semi-structured interviews were performed and transcribed by two researchers. These were analysed to determine items, lists were created, and item importance was rated through a Delphi process using a 7-point Likert scale (1=strongly disagree, 7=strongly agree). Items scoring >6 were extracted and collated into an assessment form, the design and content of which were determined by an expert focus group. This was piloted; internal consistency was determined by Cronbach's α, and inter-rater reliability by the intraclass correlation coefficient, ICC (significance: p<0.05). Since there was no gold standard, construct and criterion validity could not be assessed.

Results:
43 interviews were performed (29 surgical trainers, 10 trainees, 4 educationalists), with a mean length of 17 min (range 6-42 min); analysis and item creation showed excellent inter-rater agreement (Cohen's κ=0.92). Lists pertaining to trainer characteristics, training structure and feedback (188 items in total) were distributed to 11 trainers and 7 trainees from different NTP training centres, and consensus was reached after two rounds of the Delphi process. The focus group and piloting produced an assessment form (23 iterations) that could be completed by an observer in real time during a training episode. The Structured Training Trainer Assessment Report (STTAR) consisted of 64 essential factors, separated into 4 groups (training structure, behaviour, trainer attributes and role-modelling) and organised around a training session timeline: "set" (beginning of the case), "dialogue" (during the case) and "closure" (feedback). Pilot testing (6 trainers, 48 different assessments) demonstrated good face and content validity, internal consistency (α=0.88) and inter-rater reliability (ICC=0.75), and the STTAR was accepted as feasible and useful [median score 4 of 5 (IQR 1-5)]. As expected given the select group, there was no significant difference in overall trainer scores (mean 21.09, max 28, SD 1.06; p=0.138), although all trainers scored lower on items within "closure", highlighting an area requiring improvement.

Conclusions:
An educational assessment tool that is reliable, valid and uniquely comprehensive in evaluating the surgical training episode has been designed and implemented within the NTP. It can quantify the teaching process in its entirety, at a national level and with consultant trainers, and highlight areas for improvement. Its transferability to other medical specialties should be investigated, since it could provide a cost-saving, straightforward and constructive way of ensuring that every training episode is as effective as possible.

Session: Poster Presentation

Program Number: P142
