好 "评估:方法的多样性可能是女皇,但合理的方法仍然是女王

IF 4.9 · CAS Tier 1 (Education) · JCR Q1, EDUCATION, SCIENTIFIC DISCIPLINES
Betty Onyura
{"title":"好 \"评估:方法的多样性可能是女皇,但合理的方法仍然是女王","authors":"Betty Onyura","doi":"10.1111/medu.15374","DOIUrl":null,"url":null,"abstract":"<p>The appetite for evaluation of health professions education (HPE) programmes appears insatiable. Mandates to evaluate prevail and administrators persist in their quests for data to inform decision-making. Understandably, people want to know if their efforts ‘move the needle’ for today's learners and tomorrow's clinicians. Regulatory bodies want assurances that accreditation standards are met. And, communities deserve evidence-grounded accountability from those charged with training clinician workforces. Interestingly, however, the unabating appetite for evaluation exists in contrast to relatively modest scholarship or capacity-building on evaluation across HPE. Thus, it is encouraging to see contributions such as the one from Rees et al.<span><sup>1</sup></span> in this issue of <i>Medical Education</i>.</p><p>Rees et al.<span><sup>1</sup></span> present a comprehensive overview of the realist interview method. Following critical analysis of select realist evaluation studies, they outline best practice recommendations for applying realist interview techniques to realist evaluation. This work is timely, given the increasing popularity of realist evaluation across HPE, which may be a promising indicator of methodological expansion in the field's evaluation practices. Indeed, several scholars have critiqued the field's seeming preoccupation with simple outcome verification as the ‘go-to’ evaluation strategy.<span><sup>2-6</sup></span> Figuratively, the crux of this critique has been that when it comes to evaluation, HPE has focused on the trees—familiar, uniform trees—while neglecting the diversity of the forest. Indeed, I contend that embracing methodological diversity is the essence of ‘good’ evaluation. Not only does it enable the exploration of novel lines of inquiry, it also facilitates needed reflection on or redirection of how we are using evaluation to assign value (see Gates<span><sup>7</sup></span>). Conversely, asking similar questions of similar objects in routinised ways cannot satiate the appetite for new insights or innovative, evidence-informed, shifts away from a sub-optimal status quo across our academic healthcare systems.<span><sup>8-10</sup></span></p><p>As such, Rees et al.<span><sup>1</sup></span> present a strong value proposition for how realist evaluation expands possibilities for evaluative inquiry. Using examples, they illustrate the many avenues that emerge when a realist approach is used to explore how programme contexts may variably trigger change mechanisms and influence differences in outcomes. However, championing the merits of methodological diversity in evaluation is incidental to their aims. Rather, their core argument is best summarised as follows: The uptake of contemporary evaluation methodologies is of limited value if it is not accompanied by proper implementation of associated methods. Simply put, evaluators must adopt sound methods with fidelity to their espoused methodologies.</p><p>Additional points stand out.</p><p>One, not all interview methods are equal. Though interviews are a common data collection method, there are unique features of interview techniques across different research traditions and evaluation methodologies. 
For example, Rees et al.<span><sup>1</sup></span> explain how realist interviewing differs from interviewing within certain qualitative traditions where the focus is on eliciting participants' experiences on a given topic. In contrast, realist interview techniques are designed to elicit information to support theory building and testing of assumptions about programme functioning.</p><p>Two, realist interviewing requires balancing the tension of being both teacher and learner in the service of credible theory building. Rees et al.<span><sup>1</sup></span> describe the ‘teacher–learner cycle’ as a core tenet of realist interviewing. Realist interviewers must hold knowledge about both the intervention and about theoretical linkages that feasibly explain its functioning. Interviewers must share some of this insight with interviewees, while maintaining the flexibility and humility required to challenge their assumptions and refine theories in response to insights from interviewees' experiential knowledge.</p><p>Three, a diverse sampling frame across those who are conversant with the programme is desirable. Rees et al.<span><sup>1</sup></span> contend that whenever there is an indication that a programme may function differently for different stakeholder groups or across different settings, then it is particularly important that realist interviewers ensure participant diversity. At first glance, this guidance seems spot-on. Evaluators need to recruit a diverse participant pool by sampling those who have varied insights about programmes. Moreover, it is arguably an ethical imperative to recruit participants who would just as readily discuss ‘why the programme doesn't work’ and ‘how the programme could fail’ as they would share ‘how it works and for whom’. Upon a closer look, however, the sampling guidance surfaces a notable limitation of realist interviewing.</p><p>Qualifying for participation in the ways the authors' outline requires that the ideal target participant meet multiple criteria. These seemingly include (i) significant literacy about the programme, (ii) capacity to verbally explain how potentially nuanced experiences may trigger one outcome versus another, (iii) cognitive capacity to understand and accurately interpret interviewers' assumptions about programme functioning, (iv) availability and willingness to support refinement of potentially abstract theories about a complex intervention and (v) ability to identify realist interviewers' potentially erroneous assumptions about programme functioning, <i>plus</i> (vi) the confidence to challenge such assumptions in real-time. Alas! Realist interview method's appeals for participant diversity appear tough to reconcile with its seeming demand for erudite or otherwise socially assured participants with the gift of both time and articulation. Such sampling requirements could lead to equity gaps, inadvertently privileging certain perspectives over others. Indeed, this issue sorely highlights the necessity of their argument for improving evaluation reporting.</p><p>Following a review of select realist studies, Rees et al.<span><sup>1</sup></span> spotlight transparency in evaluation reporting—as both a gap—and best practice imperative. They note that common transparency gaps include scholars' failures to report sampling approaches or interview techniques. This message must not be discounted. Embracing methodological diversity is necessary but insufficient for high-quality evaluation. 
Essentially, it is inadequate to simply value evaluation methodology as a gauge that indicates whether our ‘needle’ is shifting. We must also take on the responsibility to monitor the quality and operations of the gauge itself. Rees et al. help us do that, by instructing us on how to ‘operate’ the realist interview method. Perhaps, next, scholars can explore how complementary methods can mitigate the approach's limitations.</p><p><b>Betty Onyura:</b> Conceptualization; writing—review and editing; writing—original draft.</p>","PeriodicalId":18370,"journal":{"name":"Medical Education","volume":null,"pages":null},"PeriodicalIF":4.9000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/medu.15374","citationCount":"0","resultStr":"{\"title\":\"‘Good’ evaluation: Methodological diversity may be empress, but sound methods remain queen\",\"authors\":\"Betty Onyura\",\"doi\":\"10.1111/medu.15374\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The appetite for evaluation of health professions education (HPE) programmes appears insatiable. Mandates to evaluate prevail and administrators persist in their quests for data to inform decision-making. Understandably, people want to know if their efforts ‘move the needle’ for today's learners and tomorrow's clinicians. Regulatory bodies want assurances that accreditation standards are met. And, communities deserve evidence-grounded accountability from those charged with training clinician workforces. Interestingly, however, the unabating appetite for evaluation exists in contrast to relatively modest scholarship or capacity-building on evaluation across HPE. Thus, it is encouraging to see contributions such as the one from Rees et al.<span><sup>1</sup></span> in this issue of <i>Medical Education</i>.</p><p>Rees et al.<span><sup>1</sup></span> present a comprehensive overview of the realist interview method. Following critical analysis of select realist evaluation studies, they outline best practice recommendations for applying realist interview techniques to realist evaluation. This work is timely, given the increasing popularity of realist evaluation across HPE, which may be a promising indicator of methodological expansion in the field's evaluation practices. Indeed, several scholars have critiqued the field's seeming preoccupation with simple outcome verification as the ‘go-to’ evaluation strategy.<span><sup>2-6</sup></span> Figuratively, the crux of this critique has been that when it comes to evaluation, HPE has focused on the trees—familiar, uniform trees—while neglecting the diversity of the forest. Indeed, I contend that embracing methodological diversity is the essence of ‘good’ evaluation. Not only does it enable the exploration of novel lines of inquiry, it also facilitates needed reflection on or redirection of how we are using evaluation to assign value (see Gates<span><sup>7</sup></span>). Conversely, asking similar questions of similar objects in routinised ways cannot satiate the appetite for new insights or innovative, evidence-informed, shifts away from a sub-optimal status quo across our academic healthcare systems.<span><sup>8-10</sup></span></p><p>As such, Rees et al.<span><sup>1</sup></span> present a strong value proposition for how realist evaluation expands possibilities for evaluative inquiry. 
Using examples, they illustrate the many avenues that emerge when a realist approach is used to explore how programme contexts may variably trigger change mechanisms and influence differences in outcomes. However, championing the merits of methodological diversity in evaluation is incidental to their aims. Rather, their core argument is best summarised as follows: The uptake of contemporary evaluation methodologies is of limited value if it is not accompanied by proper implementation of associated methods. Simply put, evaluators must adopt sound methods with fidelity to their espoused methodologies.</p><p>Additional points stand out.</p><p>One, not all interview methods are equal. Though interviews are a common data collection method, there are unique features of interview techniques across different research traditions and evaluation methodologies. For example, Rees et al.<span><sup>1</sup></span> explain how realist interviewing differs from interviewing within certain qualitative traditions where the focus is on eliciting participants' experiences on a given topic. In contrast, realist interview techniques are designed to elicit information to support theory building and testing of assumptions about programme functioning.</p><p>Two, realist interviewing requires balancing the tension of being both teacher and learner in the service of credible theory building. Rees et al.<span><sup>1</sup></span> describe the ‘teacher–learner cycle’ as a core tenet of realist interviewing. Realist interviewers must hold knowledge about both the intervention and about theoretical linkages that feasibly explain its functioning. Interviewers must share some of this insight with interviewees, while maintaining the flexibility and humility required to challenge their assumptions and refine theories in response to insights from interviewees' experiential knowledge.</p><p>Three, a diverse sampling frame across those who are conversant with the programme is desirable. Rees et al.<span><sup>1</sup></span> contend that whenever there is an indication that a programme may function differently for different stakeholder groups or across different settings, then it is particularly important that realist interviewers ensure participant diversity. At first glance, this guidance seems spot-on. Evaluators need to recruit a diverse participant pool by sampling those who have varied insights about programmes. Moreover, it is arguably an ethical imperative to recruit participants who would just as readily discuss ‘why the programme doesn't work’ and ‘how the programme could fail’ as they would share ‘how it works and for whom’. Upon a closer look, however, the sampling guidance surfaces a notable limitation of realist interviewing.</p><p>Qualifying for participation in the ways the authors' outline requires that the ideal target participant meet multiple criteria. These seemingly include (i) significant literacy about the programme, (ii) capacity to verbally explain how potentially nuanced experiences may trigger one outcome versus another, (iii) cognitive capacity to understand and accurately interpret interviewers' assumptions about programme functioning, (iv) availability and willingness to support refinement of potentially abstract theories about a complex intervention and (v) ability to identify realist interviewers' potentially erroneous assumptions about programme functioning, <i>plus</i> (vi) the confidence to challenge such assumptions in real-time. Alas! 
Realist interview method's appeals for participant diversity appear tough to reconcile with its seeming demand for erudite or otherwise socially assured participants with the gift of both time and articulation. Such sampling requirements could lead to equity gaps, inadvertently privileging certain perspectives over others. Indeed, this issue sorely highlights the necessity of their argument for improving evaluation reporting.</p><p>Following a review of select realist studies, Rees et al.<span><sup>1</sup></span> spotlight transparency in evaluation reporting—as both a gap—and best practice imperative. They note that common transparency gaps include scholars' failures to report sampling approaches or interview techniques. This message must not be discounted. Embracing methodological diversity is necessary but insufficient for high-quality evaluation. Essentially, it is inadequate to simply value evaluation methodology as a gauge that indicates whether our ‘needle’ is shifting. We must also take on the responsibility to monitor the quality and operations of the gauge itself. Rees et al. help us do that, by instructing us on how to ‘operate’ the realist interview method. Perhaps, next, scholars can explore how complementary methods can mitigate the approach's limitations.</p><p><b>Betty Onyura:</b> Conceptualization; writing—review and editing; writing—original draft.</p>\",\"PeriodicalId\":18370,\"journal\":{\"name\":\"Medical Education\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/medu.15374\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical Education\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/medu.15374\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Education","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/medu.15374","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract


The appetite for evaluation of health professions education (HPE) programmes appears insatiable. Mandates to evaluate prevail and administrators persist in their quests for data to inform decision-making. Understandably, people want to know if their efforts ‘move the needle’ for today's learners and tomorrow's clinicians. Regulatory bodies want assurances that accreditation standards are met. And, communities deserve evidence-grounded accountability from those charged with training clinician workforces. Interestingly, however, the unabating appetite for evaluation exists in contrast to relatively modest scholarship or capacity-building on evaluation across HPE. Thus, it is encouraging to see contributions such as the one from Rees et al.1 in this issue of Medical Education.

Rees et al.1 present a comprehensive overview of the realist interview method. Following critical analysis of select realist evaluation studies, they outline best practice recommendations for applying realist interview techniques to realist evaluation. This work is timely, given the increasing popularity of realist evaluation across HPE, which may be a promising indicator of methodological expansion in the field's evaluation practices. Indeed, several scholars have critiqued the field's seeming preoccupation with simple outcome verification as the ‘go-to’ evaluation strategy.2-6 Figuratively, the crux of this critique has been that when it comes to evaluation, HPE has focused on the trees—familiar, uniform trees—while neglecting the diversity of the forest. Indeed, I contend that embracing methodological diversity is the essence of ‘good’ evaluation. Not only does it enable the exploration of novel lines of inquiry, it also facilitates needed reflection on or redirection of how we are using evaluation to assign value (see Gates7). Conversely, asking similar questions of similar objects in routinised ways cannot satiate the appetite for new insights or innovative, evidence-informed, shifts away from a sub-optimal status quo across our academic healthcare systems.8-10

As such, Rees et al.1 present a strong value proposition for how realist evaluation expands possibilities for evaluative inquiry. Using examples, they illustrate the many avenues that emerge when a realist approach is used to explore how programme contexts may variably trigger change mechanisms and influence differences in outcomes. However, championing the merits of methodological diversity in evaluation is incidental to their aims. Rather, their core argument is best summarised as follows: The uptake of contemporary evaluation methodologies is of limited value if it is not accompanied by proper implementation of associated methods. Simply put, evaluators must adopt sound methods with fidelity to their espoused methodologies.

Additional points stand out.

One, not all interview methods are equal. Though interviews are a common data collection method, there are unique features of interview techniques across different research traditions and evaluation methodologies. For example, Rees et al.1 explain how realist interviewing differs from interviewing within certain qualitative traditions where the focus is on eliciting participants' experiences on a given topic. In contrast, realist interview techniques are designed to elicit information to support theory building and testing of assumptions about programme functioning.

Two, realist interviewing requires balancing the tension of being both teacher and learner in the service of credible theory building. Rees et al.1 describe the ‘teacher–learner cycle’ as a core tenet of realist interviewing. Realist interviewers must hold knowledge about both the intervention and about theoretical linkages that feasibly explain its functioning. Interviewers must share some of this insight with interviewees, while maintaining the flexibility and humility required to challenge their assumptions and refine theories in response to insights from interviewees' experiential knowledge.

Three, a diverse sampling frame across those who are conversant with the programme is desirable. Rees et al.1 contend that whenever there is an indication that a programme may function differently for different stakeholder groups or across different settings, then it is particularly important that realist interviewers ensure participant diversity. At first glance, this guidance seems spot-on. Evaluators need to recruit a diverse participant pool by sampling those who have varied insights about programmes. Moreover, it is arguably an ethical imperative to recruit participants who would just as readily discuss ‘why the programme doesn't work’ and ‘how the programme could fail’ as they would share ‘how it works and for whom’. Upon a closer look, however, the sampling guidance surfaces a notable limitation of realist interviewing.

Qualifying for participation in the ways the authors outline requires that the ideal target participant meet multiple criteria. These seemingly include (i) significant literacy about the programme, (ii) capacity to verbally explain how potentially nuanced experiences may trigger one outcome versus another, (iii) cognitive capacity to understand and accurately interpret interviewers' assumptions about programme functioning, (iv) availability and willingness to support refinement of potentially abstract theories about a complex intervention and (v) ability to identify realist interviewers' potentially erroneous assumptions about programme functioning, plus (vi) the confidence to challenge such assumptions in real time. Alas! The realist interview method's appeal for participant diversity appears tough to reconcile with its seeming demand for erudite or otherwise socially assured participants with the gift of both time and articulation. Such sampling requirements could lead to equity gaps, inadvertently privileging certain perspectives over others. Indeed, this issue sorely highlights the necessity of their argument for improving evaluation reporting.

Following a review of select realist studies, Rees et al.1 spotlight transparency in evaluation reporting as both a gap and a best-practice imperative. They note that common transparency gaps include scholars' failures to report sampling approaches or interview techniques. This message must not be discounted. Embracing methodological diversity is necessary but insufficient for high-quality evaluation. Essentially, it is inadequate to simply value evaluation methodology as a gauge that indicates whether our ‘needle’ is shifting. We must also take on the responsibility to monitor the quality and operation of the gauge itself. Rees et al. help us do that by instructing us on how to ‘operate’ the realist interview method. Perhaps, next, scholars can explore how complementary methods can mitigate the approach's limitations.

Betty Onyura: Conceptualization; writing—review and editing; writing—original draft.

Source journal
Medical Education (Medicine – Health Care)
CiteScore: 8.40
Self-citation rate: 10.00%
Articles per year: 279
Review time: 4-8 weeks
Journal introduction: Medical Education seeks to be the pre-eminent journal in the field of education for health care professionals, and publishes material of the highest quality, reflecting worldwide or provocative issues and perspectives. The journal welcomes high quality papers on all aspects of health professional education, including undergraduate education, postgraduate training, continuing professional development and interprofessional education.