{"title":"The best of both worlds: Assessing trainee progression in the era of competency based medical education","authors":"Stephen Gauthier, Rose Hatala","doi":"10.1111/medu.15390","DOIUrl":null,"url":null,"abstract":"<p>As clinical educators working in the Canadian postgraduate medical education landscape, we are often asked ‘why competency-based medical education (CBME)’? CBME promises clearer training outcomes with a more explicit assessment of these outcomes.<span><sup>1</sup></span> Ideally, this system allows for individualised attention to residents where areas to work on are readily identified. Summative decisions are made by groups (e.g. clinical competence committees [CCCs]) that decide on the entrustment and promotion of individual residents based on assessments of their performance in professional activities (e.g. entrustable professional activities [EPAs] or milestones).<span><sup>1</sup></span></p><p>In CBME, programs are attempting to implement prospective entrustment decisions while moving away from the systems of presumptive trust that were a hallmark of pre-CBME, time-based training models.<span><sup>1</sup></span> Operationalising this in a meaningful way has been fraught with difficulty. While there were problems with an over-reliance on presumptive trust and an under-reliance on objective assessment of competence, we are concerned that the current CBME implementation has swung too far towards heavily relying on assessment tools and processes that lack validity evidence to support the entrustment and promotion decisions that CCCs are trying to make.</p><p>In North America, the implementation of CBME has meant that almost every professional activity (or milestone) deemed important has been tightly tied to the completion of directly observed workplace-based assessments (WBAs) of that activity. So tightly have EPAs been bound to these WBAs that residents and educators alike use the terms interchangeably (i.e. ‘Send me an EPA’ means ‘let us complete a WBA’<span><sup>2</sup></span>). Unfortunately, this overloads supervisors and residents with assessment quotas and overwhelms CCCs with assessment data, some meaningful, some not.<span><sup>3</sup></span></p><p>Furthermore, there has been an over-emphasis on one very narrow conceptualisation of WBA as an entrustment-based tool meant to assess single encounters without considering if it is the right tool for the job. Several assessment tools exist that can be applied in the workplace (longitudinal WBA, indirect observation, multi-source feedback, etc.). For some activities, assessment outside the workplace (simulation, objective structured clinical examination [OSCE], etc.) might provide more useful information to CCCs.</p><p>Over-reliance on entrustment-based WBA, or over-reliance on WBA itself, is based on the dangerous assumption that an assessment tool with supportive validity evidence in one context is transferrable to other contexts. In this issue, Ryan et al. show how unreliable WBA can be when a single WBA tool is deployed across different contexts.<span><sup>4</sup></span></p><p>To combat the over-reliance on WBA, we argue for locally developed programmatic assessment.<span><sup>5</sup></span> Individual programs and specialties need the autonomy to develop their own programs of assessment supported by validity evidence derived within their own contexts. While WBAs may be used to assess certain activities, assessments of other activities could rely on other assessment methods. 
Not every professional activity worth assessing needs a specific number of narrowly conceptualised WBAs for thoughtful CCCs to make defensible decisions about entrustment and promotion.</p><p>This brings us to the core question of any system of assessment, including CBME: What decisions about our residents are we trying to make?<span><sup>6</sup></span> To identify residents in difficulty, do we have assessment tools with supportive validity evidence to identify these residents and the areas for improvement? To increase the feedback provided to residents, do we need high-volume WBA, and does WBA achieve this goal? To decide if a resident can be trusted to take on additional clinical responsibility, can the assessment system provide a reliable and holistic view of the resident's competence?</p><p>One path forward from this over-reliance on WBA is to recognise the value of presumptive trust (which grew out of years of experience with resident training and systems of practice) while leveraging the strengths of CBME in terms of clear and relevant training expectations and outcomes. A system where residents are given presumptive trust during certain stages of training with thoughtfully deployed assessments at key developmental moments would reduce the resource requirements of implementing high numbers of WBAs. Doing so in a way that works for all programs and specialties necessitates developing programmatic assessment situated in the local context. In this model, we could formalise and embrace the use of presumptive trust while adding more assessment than in the past, pausing at key moments of resident development and looking for red flags as a signal that the presumptive trust of an individual resident is not acceptable. Key to this approach would be to balance routine progression through training, grounded in a degree of presumptive trust, with a locally developed program of assessment that supports the CCC's decisions.</p><p>Using the analogy discussed in Schumacher et al.'s paper in this issue,<span><sup>7</sup></span> while it is prudent to stop the conveyor belt of training to make prospective entrustment decisions, the conveyor belt need not be stopped for every single professional activity for every resident. Thoughtfully implementing a system that incorporates presumptive trust ensures that CCCs are not overwhelmed with a conveyor belt that is constantly turning on and off and instead could effectively focus on stopping the conveyor at key decision points. As Schumacher et al.'s study highlights, this model is in part what is currently happening on the ground.<span><sup>7</sup></span></p><p>In such a model, we need local programs of assessment that are fit for purpose. Programs would start by asking themselves what problem they are addressing and what decisions they are making. Next, ask what the best data is to make these decisions and which tools will best capture that data. This approach allows a CCC to consider their own local factors, like the program's size, faculty interaction with assessment tools and how various assessment tools have worked for them in the past. Being clear about the link between the individual tools, the validity evidence supporting the program of assessment in the local context and how the CCC uses that data is key.</p><p>More data does not necessarily mean more defensible decisions. 
The limited time and resources available to training programs cannot be wasted on obtaining unhelpful assessment data that does not support the decisions they are trying to make. Reducing the use of WBA where it is not fit for purpose and developing locally sustainable and defensible programs of assessment are steps towards unlocking the value of CBME.</p><p><b>Stephen Gauthier</b>: Conceptualization (equal); writing—original draft (lead); writing—review and editing (equal). <b>Rose Hatala</b>: Conceptualization (equal); writing—original draft (supporting); writing—review and editing (equal).</p>","PeriodicalId":18370,"journal":{"name":"Medical Education","volume":null,"pages":null},"PeriodicalIF":4.9000,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/medu.15390","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Education","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/medu.15390","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Abstract
As clinical educators working in the Canadian postgraduate medical education landscape, we are often asked ‘why competency-based medical education (CBME)?’ CBME promises clearer training outcomes with a more explicit assessment of these outcomes.1 Ideally, this system allows for individualised attention to residents, where areas to work on are readily identified. Summative decisions about the entrustment and promotion of individual residents are made by groups (e.g. clinical competence committees [CCCs]) based on assessments of residents' performance in professional activities (e.g. entrustable professional activities [EPAs] or milestones).1
In CBME, programs are attempting to implement prospective entrustment decisions while moving away from the systems of presumptive trust that were a hallmark of pre-CBME, time-based training models.1 Operationalising this in a meaningful way has been fraught with difficulty. While there were problems with an over-reliance on presumptive trust and an under-reliance on objective assessment of competence, we are concerned that the current CBME implementation has swung too far towards heavily relying on assessment tools and processes that lack validity evidence to support the entrustment and promotion decisions that CCCs are trying to make.
In North America, the implementation of CBME has meant that almost every professional activity (or milestone) deemed important has been tightly tied to the completion of directly observed workplace-based assessments (WBAs) of that activity. So tightly have EPAs been bound to these WBAs that residents and educators alike use the terms interchangeably (i.e. ‘Send me an EPA’ means ‘let us complete a WBA’2). Unfortunately, this overloads supervisors and residents with assessment quotas and overwhelms CCCs with assessment data, some meaningful, some not.3
Furthermore, there has been an over-emphasis on one very narrow conceptualisation of WBA, as an entrustment-based tool meant to assess single encounters, without consideration of whether it is the right tool for the job. Several assessment tools exist that can be applied in the workplace (longitudinal WBA, indirect observation, multi-source feedback, etc.). For some activities, assessment outside the workplace (simulation, objective structured clinical examination [OSCE], etc.) might provide more useful information to CCCs.
Over-reliance on entrustment-based WBA, or over-reliance on WBA itself, is based on the dangerous assumption that an assessment tool with supportive validity evidence in one context is transferable to other contexts. In this issue, Ryan et al. show how unreliable WBA can be when a single WBA tool is deployed across different contexts.4
To combat the over-reliance on WBA, we argue for locally developed programmatic assessment.5 Individual programs and specialties need the autonomy to develop their own programs of assessment supported by validity evidence derived within their own contexts. While WBAs may be used to assess certain activities, assessments of other activities could rely on other assessment methods. Not every professional activity worth assessing needs a specific number of narrowly conceptualised WBAs for thoughtful CCCs to make defensible decisions about entrustment and promotion.
This brings us to the core question of any system of assessment, including CBME: What decisions about our residents are we trying to make?6 To identify residents in difficulty, do we have assessment tools with supportive validity evidence to identify these residents and the areas for improvement? To increase the feedback provided to residents, do we need high-volume WBA, and does WBA achieve this goal? To decide if a resident can be trusted to take on additional clinical responsibility, can the assessment system provide a reliable and holistic view of the resident's competence?
One path forward from this over-reliance on WBA is to recognise the value of presumptive trust (which grew out of years of experience with resident training and systems of practice) while leveraging the strengths of CBME in terms of clear and relevant training expectations and outcomes. A system where residents are given presumptive trust during certain stages of training with thoughtfully deployed assessments at key developmental moments would reduce the resource requirements of implementing high numbers of WBAs. Doing so in a way that works for all programs and specialties necessitates developing programmatic assessment situated in the local context. In this model, we could formalise and embrace the use of presumptive trust while adding more assessment than in the past, pausing at key moments of resident development and looking for red flags as a signal that the presumptive trust of an individual resident is not acceptable. Key to this approach would be to balance routine progression through training, grounded in a degree of presumptive trust, with a locally developed program of assessment that supports the CCC's decisions.
Using the analogy discussed in Schumacher et al.'s paper in this issue,7 while it is prudent to stop the conveyor belt of training to make prospective entrustment decisions, the conveyor belt need not be stopped for every single professional activity for every resident. Thoughtfully implementing a system that incorporates presumptive trust ensures that CCCs are not overwhelmed with a conveyor belt that is constantly turning on and off and instead could effectively focus on stopping the conveyor at key decision points. As Schumacher et al.'s study highlights, this model is in part what is currently happening on the ground.7
In such a model, we need local programs of assessment that are fit for purpose. Programs would start by asking themselves what problem they are addressing and what decisions they are making. Next, they would ask what data would best inform those decisions and which tools would best capture those data. This approach allows a CCC to consider its own local factors, like the program's size, faculty interaction with assessment tools and how various assessment tools have worked for the program in the past. The key is being clear about the link between the individual tools, the validity evidence supporting the program of assessment in the local context and how the CCC uses those data.
More data does not necessarily mean more defensible decisions. The limited time and resources available to training programs cannot be wasted on obtaining unhelpful assessment data that does not support the decisions they are trying to make. Reducing the use of WBA where it is not fit for purpose and developing locally sustainable and defensible programs of assessment are steps towards unlocking the value of CBME.
Stephen Gauthier: Conceptualization (equal); writing—original draft (lead); writing—review and editing (equal). Rose Hatala: Conceptualization (equal); writing—original draft (supporting); writing—review and editing (equal).