Differences in Physician Performance and Self-rated Confidence on High- and Low-Stakes Knowledge Assessments in Board Certification.

IF 1.6 · CAS Tier 4 (Education) · JCR Q2, EDUCATION, SCIENTIFIC DISCIPLINES
David W Price, Ting Wang, Thomas R O'Neill, Andrew Bazemore, Warren P Newton
DOI: 10.1097/CEH.0000000000000487
Journal of Continuing Education in the Health Professions, pp. 2-10
Published: 2024-12-01 (Epub 2023-03-07)
Citations: 0

Abstract

Introduction: Evidence links assessment to optimal learning, affirming that physicians are more likely to study, learn, and practice skills when some form of consequence ("stakes") may result from an assessment. We lack evidence, however, on how physicians' confidence in their knowledge relates to performance on assessments, and whether this varies based on the stakes of the assessment.

Methods: Our retrospective repeated-measures design compared differences in patterns of physician answer accuracy and answer confidence among physicians participating in both a high-stakes and a low-stakes longitudinal assessment of the American Board of Family Medicine.

Results: After 1 and 2 years, participants were more often correct but less confident in their accuracy on a higher-stakes longitudinal knowledge assessment compared with a lower-stakes assessment. There were no differences in question difficulty between the two platforms. Variation existed between platforms in time spent answering questions, use of resources to answer questions, and perceived question relevance to practice.

Discussion: This novel study of physician certification suggests that the accuracy of physician performance increases with higher stakes, even as self-reported confidence in their knowledge declines. It suggests that physicians may be more engaged in higher-stakes compared with lower-stakes assessments. With medical knowledge growing exponentially, these analyses provide an example of the complementary roles of higher- and lower-stakes knowledge assessment in supporting physician learning during continuing specialty board certification.

Source journal
CiteScore: 3.00
Self-citation rate: 16.70%
Articles per year: 85
Review time: >12 weeks
Journal description: The Journal of Continuing Education in the Health Professions is a quarterly journal publishing articles relevant to theory, practice, and policy development for continuing education in the health sciences. The journal presents original research and essays on subjects involving the lifelong learning of professionals, with a focus on continuous quality improvement, competency assessment, and knowledge translation. It provides thoughtful advice to those who develop, conduct, and evaluate continuing education programs.