Abstract
Background: Competence by Design (CBD) residency programs increasingly depend on tools that provide reliable assessments, require minimal rater training, and measure progression through the CBD milestones. To assess intraoperative skills, global rating scales and entrustability ratings are commonly used but may require extensive rater training. The Competency Continuum (CC) is a CBD framework that may also serve as a tool for assessing laparoscopic skills. This study aimed to compare the CC with two established assessment tools: the Global Operative Assessment of Laparoscopic Skills (GOALS) and the Zwisch scale.
Methods: Four expert surgeons rated thirty laparoscopic cholecystectomy videos. Two raters used the GOALS scale while the remaining two raters used both the Zwisch scale and CC. Each rater received scale-specific training. Descriptive statistics, inter-rater reliabilities (IRR), and Pearson’s correlations were calculated for each scale.
Results: Significant positive correlations were found between GOALS and Zwisch (r = 0.75, p < 0.001), CC and GOALS (r = 0.79, p < 0.001), and CC and Zwisch (r = 0.90, p < 0.001). The CC had an inter-rater reliability of 0.74, whereas the GOALS and Zwisch scales had inter-rater reliabilities of 0.44 and 0.43, respectively. Of the three tools, the CC therefore had the highest inter-rater reliability while requiring minimal rater training to achieve reliable scores.
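To illustrate the statistics reported above, the sketch below computes a Pearson correlation between two raters' scores and a two-way random-effects ICC(2,1), a common formulation of inter-rater reliability. The rater labels and scores are hypothetical, invented for the example; the study may have used a different ICC variant.

```python
# Minimal sketch (hypothetical data): Pearson's r between two raters and a
# two-way random-effects ICC(2,1) for absolute agreement, single rater.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def icc2_1(scores):
    """ICC(2,1): `scores` is a list of rows (subjects, e.g. videos),
    each row holding one score per rater."""
    n = len(scores)          # subjects
    k = len(scores[0])       # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # mean square, subjects
    msc = ss_cols / (k - 1)              # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores from two raters for five videos:
rater_a = [3, 4, 2, 5, 4]
rater_b = [3, 5, 2, 4, 4]
print(round(pearson_r(rater_a, rater_b), 2))            # → 0.81
print(round(icc2_1(list(zip(rater_a, rater_b))), 2))    # → 0.84
```

In practice these statistics are usually obtained from a statistics package rather than computed by hand; the explicit formulas are shown here only to make the reported quantities concrete.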
Conclusion: The CC may be a reliable tool for assessing intraoperative laparoscopic skills and for providing trainees with formative feedback relevant to the CBD milestones. Future research should gather additional validity evidence for the use of the CC as an independent assessment tool.