{"title":"好 \"评估:方法的多样性可能是女皇,但合理的方法仍然是女王","authors":"Betty Onyura","doi":"10.1111/medu.15374","DOIUrl":null,"url":null,"abstract":"<p>The appetite for evaluation of health professions education (HPE) programmes appears insatiable. Mandates to evaluate prevail and administrators persist in their quests for data to inform decision-making. Understandably, people want to know if their efforts ‘move the needle’ for today's learners and tomorrow's clinicians. Regulatory bodies want assurances that accreditation standards are met. And, communities deserve evidence-grounded accountability from those charged with training clinician workforces. Interestingly, however, the unabating appetite for evaluation exists in contrast to relatively modest scholarship or capacity-building on evaluation across HPE. Thus, it is encouraging to see contributions such as the one from Rees et al.<span><sup>1</sup></span> in this issue of <i>Medical Education</i>.</p><p>Rees et al.<span><sup>1</sup></span> present a comprehensive overview of the realist interview method. Following critical analysis of select realist evaluation studies, they outline best practice recommendations for applying realist interview techniques to realist evaluation. This work is timely, given the increasing popularity of realist evaluation across HPE, which may be a promising indicator of methodological expansion in the field's evaluation practices. Indeed, several scholars have critiqued the field's seeming preoccupation with simple outcome verification as the ‘go-to’ evaluation strategy.<span><sup>2-6</sup></span> Figuratively, the crux of this critique has been that when it comes to evaluation, HPE has focused on the trees—familiar, uniform trees—while neglecting the diversity of the forest. Indeed, I contend that embracing methodological diversity is the essence of ‘good’ evaluation. Not only does it enable the exploration of novel lines of inquiry, it also facilitates needed reflection on or redirection of how we are using evaluation to assign value (see Gates<span><sup>7</sup></span>). Conversely, asking similar questions of similar objects in routinised ways cannot satiate the appetite for new insights or innovative, evidence-informed, shifts away from a sub-optimal status quo across our academic healthcare systems.<span><sup>8-10</sup></span></p><p>As such, Rees et al.<span><sup>1</sup></span> present a strong value proposition for how realist evaluation expands possibilities for evaluative inquiry. Using examples, they illustrate the many avenues that emerge when a realist approach is used to explore how programme contexts may variably trigger change mechanisms and influence differences in outcomes. However, championing the merits of methodological diversity in evaluation is incidental to their aims. Rather, their core argument is best summarised as follows: The uptake of contemporary evaluation methodologies is of limited value if it is not accompanied by proper implementation of associated methods. Simply put, evaluators must adopt sound methods with fidelity to their espoused methodologies.</p><p>Additional points stand out.</p><p>One, not all interview methods are equal. Though interviews are a common data collection method, there are unique features of interview techniques across different research traditions and evaluation methodologies. 
For example, Rees et al.<span><sup>1</sup></span> explain how realist interviewing differs from interviewing within certain qualitative traditions where the focus is on eliciting participants' experiences on a given topic. In contrast, realist interview techniques are designed to elicit information to support theory building and testing of assumptions about programme functioning.</p><p>Two, realist interviewing requires balancing the tension of being both teacher and learner in the service of credible theory building. Rees et al.<span><sup>1</sup></span> describe the ‘teacher–learner cycle’ as a core tenet of realist interviewing. Realist interviewers must hold knowledge about both the intervention and about theoretical linkages that feasibly explain its functioning. Interviewers must share some of this insight with interviewees, while maintaining the flexibility and humility required to challenge their assumptions and refine theories in response to insights from interviewees' experiential knowledge.</p><p>Three, a diverse sampling frame across those who are conversant with the programme is desirable. Rees et al.<span><sup>1</sup></span> contend that whenever there is an indication that a programme may function differently for different stakeholder groups or across different settings, then it is particularly important that realist interviewers ensure participant diversity. At first glance, this guidance seems spot-on. Evaluators need to recruit a diverse participant pool by sampling those who have varied insights about programmes. Moreover, it is arguably an ethical imperative to recruit participants who would just as readily discuss ‘why the programme doesn't work’ and ‘how the programme could fail’ as they would share ‘how it works and for whom’. Upon a closer look, however, the sampling guidance surfaces a notable limitation of realist interviewing.</p><p>Qualifying for participation in the ways the authors' outline requires that the ideal target participant meet multiple criteria. These seemingly include (i) significant literacy about the programme, (ii) capacity to verbally explain how potentially nuanced experiences may trigger one outcome versus another, (iii) cognitive capacity to understand and accurately interpret interviewers' assumptions about programme functioning, (iv) availability and willingness to support refinement of potentially abstract theories about a complex intervention and (v) ability to identify realist interviewers' potentially erroneous assumptions about programme functioning, <i>plus</i> (vi) the confidence to challenge such assumptions in real-time. Alas! Realist interview method's appeals for participant diversity appear tough to reconcile with its seeming demand for erudite or otherwise socially assured participants with the gift of both time and articulation. Such sampling requirements could lead to equity gaps, inadvertently privileging certain perspectives over others. Indeed, this issue sorely highlights the necessity of their argument for improving evaluation reporting.</p><p>Following a review of select realist studies, Rees et al.<span><sup>1</sup></span> spotlight transparency in evaluation reporting—as both a gap—and best practice imperative. They note that common transparency gaps include scholars' failures to report sampling approaches or interview techniques. This message must not be discounted. Embracing methodological diversity is necessary but insufficient for high-quality evaluation. 
Essentially, it is inadequate to simply value evaluation methodology as a gauge that indicates whether our ‘needle’ is shifting. We must also take on the responsibility to monitor the quality and operations of the gauge itself. Rees et al. help us do that, by instructing us on how to ‘operate’ the realist interview method. Perhaps, next, scholars can explore how complementary methods can mitigate the approach's limitations.</p><p><b>Betty Onyura:</b> Conceptualization; writing—review and editing; writing—original draft.</p>","PeriodicalId":18370,"journal":{"name":"Medical Education","volume":null,"pages":null},"PeriodicalIF":4.9000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/medu.15374","citationCount":"0","resultStr":"{\"title\":\"‘Good’ evaluation: Methodological diversity may be empress, but sound methods remain queen\",\"authors\":\"Betty Onyura\",\"doi\":\"10.1111/medu.15374\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The appetite for evaluation of health professions education (HPE) programmes appears insatiable. Mandates to evaluate prevail and administrators persist in their quests for data to inform decision-making. Understandably, people want to know if their efforts ‘move the needle’ for today's learners and tomorrow's clinicians. Regulatory bodies want assurances that accreditation standards are met. And, communities deserve evidence-grounded accountability from those charged with training clinician workforces. Interestingly, however, the unabating appetite for evaluation exists in contrast to relatively modest scholarship or capacity-building on evaluation across HPE. Thus, it is encouraging to see contributions such as the one from Rees et al.<span><sup>1</sup></span> in this issue of <i>Medical Education</i>.</p><p>Rees et al.<span><sup>1</sup></span> present a comprehensive overview of the realist interview method. Following critical analysis of select realist evaluation studies, they outline best practice recommendations for applying realist interview techniques to realist evaluation. This work is timely, given the increasing popularity of realist evaluation across HPE, which may be a promising indicator of methodological expansion in the field's evaluation practices. Indeed, several scholars have critiqued the field's seeming preoccupation with simple outcome verification as the ‘go-to’ evaluation strategy.<span><sup>2-6</sup></span> Figuratively, the crux of this critique has been that when it comes to evaluation, HPE has focused on the trees—familiar, uniform trees—while neglecting the diversity of the forest. Indeed, I contend that embracing methodological diversity is the essence of ‘good’ evaluation. Not only does it enable the exploration of novel lines of inquiry, it also facilitates needed reflection on or redirection of how we are using evaluation to assign value (see Gates<span><sup>7</sup></span>). Conversely, asking similar questions of similar objects in routinised ways cannot satiate the appetite for new insights or innovative, evidence-informed, shifts away from a sub-optimal status quo across our academic healthcare systems.<span><sup>8-10</sup></span></p><p>As such, Rees et al.<span><sup>1</sup></span> present a strong value proposition for how realist evaluation expands possibilities for evaluative inquiry. 
Using examples, they illustrate the many avenues that emerge when a realist approach is used to explore how programme contexts may variably trigger change mechanisms and influence differences in outcomes. However, championing the merits of methodological diversity in evaluation is incidental to their aims. Rather, their core argument is best summarised as follows: The uptake of contemporary evaluation methodologies is of limited value if it is not accompanied by proper implementation of associated methods. Simply put, evaluators must adopt sound methods with fidelity to their espoused methodologies.</p><p>Additional points stand out.</p><p>One, not all interview methods are equal. Though interviews are a common data collection method, there are unique features of interview techniques across different research traditions and evaluation methodologies. For example, Rees et al.<span><sup>1</sup></span> explain how realist interviewing differs from interviewing within certain qualitative traditions where the focus is on eliciting participants' experiences on a given topic. In contrast, realist interview techniques are designed to elicit information to support theory building and testing of assumptions about programme functioning.</p><p>Two, realist interviewing requires balancing the tension of being both teacher and learner in the service of credible theory building. Rees et al.<span><sup>1</sup></span> describe the ‘teacher–learner cycle’ as a core tenet of realist interviewing. Realist interviewers must hold knowledge about both the intervention and about theoretical linkages that feasibly explain its functioning. Interviewers must share some of this insight with interviewees, while maintaining the flexibility and humility required to challenge their assumptions and refine theories in response to insights from interviewees' experiential knowledge.</p><p>Three, a diverse sampling frame across those who are conversant with the programme is desirable. Rees et al.<span><sup>1</sup></span> contend that whenever there is an indication that a programme may function differently for different stakeholder groups or across different settings, then it is particularly important that realist interviewers ensure participant diversity. At first glance, this guidance seems spot-on. Evaluators need to recruit a diverse participant pool by sampling those who have varied insights about programmes. Moreover, it is arguably an ethical imperative to recruit participants who would just as readily discuss ‘why the programme doesn't work’ and ‘how the programme could fail’ as they would share ‘how it works and for whom’. Upon a closer look, however, the sampling guidance surfaces a notable limitation of realist interviewing.</p><p>Qualifying for participation in the ways the authors' outline requires that the ideal target participant meet multiple criteria. These seemingly include (i) significant literacy about the programme, (ii) capacity to verbally explain how potentially nuanced experiences may trigger one outcome versus another, (iii) cognitive capacity to understand and accurately interpret interviewers' assumptions about programme functioning, (iv) availability and willingness to support refinement of potentially abstract theories about a complex intervention and (v) ability to identify realist interviewers' potentially erroneous assumptions about programme functioning, <i>plus</i> (vi) the confidence to challenge such assumptions in real-time. Alas! 
Realist interview method's appeals for participant diversity appear tough to reconcile with its seeming demand for erudite or otherwise socially assured participants with the gift of both time and articulation. Such sampling requirements could lead to equity gaps, inadvertently privileging certain perspectives over others. Indeed, this issue sorely highlights the necessity of their argument for improving evaluation reporting.</p><p>Following a review of select realist studies, Rees et al.<span><sup>1</sup></span> spotlight transparency in evaluation reporting—as both a gap—and best practice imperative. They note that common transparency gaps include scholars' failures to report sampling approaches or interview techniques. This message must not be discounted. Embracing methodological diversity is necessary but insufficient for high-quality evaluation. Essentially, it is inadequate to simply value evaluation methodology as a gauge that indicates whether our ‘needle’ is shifting. We must also take on the responsibility to monitor the quality and operations of the gauge itself. Rees et al. help us do that, by instructing us on how to ‘operate’ the realist interview method. Perhaps, next, scholars can explore how complementary methods can mitigate the approach's limitations.</p><p><b>Betty Onyura:</b> Conceptualization; writing—review and editing; writing—original draft.</p>\",\"PeriodicalId\":18370,\"journal\":{\"name\":\"Medical Education\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/medu.15374\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical Education\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/medu.15374\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Education","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/medu.15374","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
‘Good’ evaluation: Methodological diversity may be empress, but sound methods remain queen
The appetite for evaluation of health professions education (HPE) programmes appears insatiable. Mandates to evaluate prevail and administrators persist in their quests for data to inform decision-making. Understandably, people want to know if their efforts ‘move the needle’ for today's learners and tomorrow's clinicians. Regulatory bodies want assurances that accreditation standards are met. And, communities deserve evidence-grounded accountability from those charged with training clinician workforces. Interestingly, however, the unabating appetite for evaluation exists in contrast to relatively modest scholarship or capacity-building on evaluation across HPE. Thus, it is encouraging to see contributions such as the one from Rees et al.1 in this issue of Medical Education.
Rees et al.1 present a comprehensive overview of the realist interview method. Following critical analysis of select realist evaluation studies, they outline best practice recommendations for applying realist interview techniques to realist evaluation. This work is timely, given the increasing popularity of realist evaluation across HPE, which may be a promising indicator of methodological expansion in the field's evaluation practices. Indeed, several scholars have critiqued the field's seeming preoccupation with simple outcome verification as the ‘go-to’ evaluation strategy.2-6 Figuratively, the crux of this critique has been that when it comes to evaluation, HPE has focused on the trees—familiar, uniform trees—while neglecting the diversity of the forest. Indeed, I contend that embracing methodological diversity is the essence of ‘good’ evaluation. Not only does it enable the exploration of novel lines of inquiry, it also facilitates needed reflection on or redirection of how we are using evaluation to assign value (see Gates7). Conversely, asking similar questions of similar objects in routinised ways cannot satiate the appetite for new insights or innovative, evidence-informed shifts away from a sub-optimal status quo across our academic healthcare systems.8-10
As such, Rees et al.1 present a strong value proposition for how realist evaluation expands possibilities for evaluative inquiry. Using examples, they illustrate the many avenues that emerge when a realist approach is used to explore how programme contexts may variably trigger change mechanisms and influence differences in outcomes. However, championing the merits of methodological diversity in evaluation is incidental to their aims. Rather, their core argument is best summarised as follows: The uptake of contemporary evaluation methodologies is of limited value if it is not accompanied by proper implementation of associated methods. Simply put, evaluators must adopt sound methods with fidelity to their espoused methodologies.
Additional points stand out.
One, not all interview methods are equal. Though interviews are a common data collection method, there are unique features of interview techniques across different research traditions and evaluation methodologies. For example, Rees et al.1 explain how realist interviewing differs from interviewing within certain qualitative traditions where the focus is on eliciting participants' experiences on a given topic. In contrast, realist interview techniques are designed to elicit information to support theory building and testing of assumptions about programme functioning.
Two, realist interviewing requires balancing the tension of being both teacher and learner in the service of credible theory building. Rees et al.1 describe the ‘teacher–learner cycle’ as a core tenet of realist interviewing. Realist interviewers must hold knowledge about both the intervention and about theoretical linkages that feasibly explain its functioning. Interviewers must share some of this insight with interviewees, while maintaining the flexibility and humility required to challenge their assumptions and refine theories in response to insights from interviewees' experiential knowledge.
Three, a diverse sampling frame across those who are conversant with the programme is desirable. Rees et al.1 contend that whenever there is an indication that a programme may function differently for different stakeholder groups or across different settings, it is particularly important that realist interviewers ensure participant diversity. At first glance, this guidance seems spot-on. Evaluators need to recruit a diverse participant pool by sampling those who have varied insights about programmes. Moreover, it is arguably an ethical imperative to recruit participants who would just as readily discuss ‘why the programme doesn't work’ and ‘how the programme could fail’ as they would share ‘how it works and for whom’. Upon a closer look, however, the sampling guidance surfaces a notable limitation of realist interviewing.
Qualifying for participation in the ways the authors outline requires that the ideal target participant meet multiple criteria. These seemingly include (i) significant literacy about the programme, (ii) capacity to verbally explain how potentially nuanced experiences may trigger one outcome versus another, (iii) cognitive capacity to understand and accurately interpret interviewers' assumptions about programme functioning, (iv) availability and willingness to support refinement of potentially abstract theories about a complex intervention and (v) ability to identify realist interviewers' potentially erroneous assumptions about programme functioning, plus (vi) the confidence to challenge such assumptions in real-time. Alas! The realist interview method's appeals for participant diversity appear tough to reconcile with its seeming demand for erudite or otherwise socially assured participants with the gift of both time and articulation. Such sampling requirements could lead to equity gaps, inadvertently privileging certain perspectives over others. Indeed, this issue sorely highlights the necessity of their argument for improving evaluation reporting.
Following a review of select realist studies, Rees et al.1 spotlight transparency in evaluation reporting as both a gap and a best practice imperative. They note that common transparency gaps include scholars' failures to report sampling approaches or interview techniques. This message must not be discounted. Embracing methodological diversity is necessary but insufficient for high-quality evaluation. Essentially, it is inadequate to simply value evaluation methodology as a gauge that indicates whether our ‘needle’ is shifting. We must also take on the responsibility to monitor the quality and operations of the gauge itself. Rees et al. help us do that by instructing us on how to ‘operate’ the realist interview method. Perhaps, next, scholars can explore how complementary methods can mitigate the approach's limitations.
Betty Onyura: Conceptualization; writing—review and editing; writing—original draft.