Entrustment and EPAs for Artificial Intelligence (AI): A Framework to Safeguard the Use of AI in Health Professions Education.

IF 5.3 · CAS Zone 2 (Education) · Q1 EDUCATION, SCIENTIFIC DISCIPLINES
Academic Medicine · Pub Date: 2025-03-01 · Epub Date: 2024-11-14 · DOI: 10.1097/ACM.0000000000005930
Brian C Gin, Patricia S O'Sullivan, Karen E Hauer, Raja-Elie Abdulnour, Madelynn Mackenzie, Olle Ten Cate, Christy K Boscardin
{"title":"Entrustment and EPAs for Artificial Intelligence (AI): A Framework to Safeguard the Use of AI in Health Professions Education.","authors":"Brian C Gin, Patricia S O'Sullivan, Karen E Hauer, Raja-Elie Abdulnour, Madelynn Mackenzie, Olle Ten Cate, Christy K Boscardin","doi":"10.1097/ACM.0000000000005930","DOIUrl":null,"url":null,"abstract":"<p><strong>Abstract: </strong>In this article, the authors propose a repurposing of the concept of entrustment to help guide the use of artificial intelligence (AI) in health professions education (HPE). Entrustment can help identify and mitigate the risks of incorporating generative AI tools with limited transparency about their accuracy, source material, and disclosure of bias into HPE practice. With AI's growing role in education-related activities, like automated medical school application screening and feedback quality and content appraisal, there is a critical need for a trust-based approach to ensure these technologies are beneficial and safe. Drawing parallels with HPE's entrustment concept, which assesses a trainee's readiness to perform clinical tasks-or entrustable professional activities-the authors propose assessing the trustworthiness of AI tools to perform an HPE-related task across 3 characteristics: ability (competence to perform tasks accurately), integrity (transparency and honesty), and benevolence (alignment with ethical principles). The authors draw on existing theories of entrustment decision-making to envision a structured way to decide on AI's role and level of engagement in HPE-related tasks, including proposing an AI-specific entrustment scale. Identifying tasks that AI could be entrusted with provides a focus around which considerations of trustworthiness and entrustment decision-making may be synthesized, making explicit the risks associated with AI use and identifying strategies to mitigate these risks. Responsible, trustworthy, and ethical use of AI requires health professions educators to develop safeguards for using it in teaching, learning, and practice-guardrails that can be operationalized via applying the entrustment concept to AI. Without such safeguards, HPE practice stands to be shaped by the oncoming wave of AI innovations tied to commercial motivations, rather than modeled after HPE principles-principles rooted in the trust and transparency that are built together with trainees and patients.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":"100 3","pages":"264-272"},"PeriodicalIF":5.3000,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Medicine","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1097/ACM.0000000000005930","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/11/14 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract

In this article, the authors propose a repurposing of the concept of entrustment to help guide the use of artificial intelligence (AI) in health professions education (HPE). Entrustment can help identify and mitigate the risks of incorporating into HPE practice generative AI tools that offer limited transparency about their accuracy, source material, and disclosure of bias. With AI's growing role in education-related activities, such as automated medical school application screening and appraisal of feedback quality and content, there is a critical need for a trust-based approach to ensure these technologies are beneficial and safe. Drawing parallels with HPE's entrustment concept, which assesses a trainee's readiness to perform clinical tasks, or entrustable professional activities, the authors propose assessing the trustworthiness of AI tools to perform an HPE-related task across 3 characteristics: ability (competence to perform tasks accurately), integrity (transparency and honesty), and benevolence (alignment with ethical principles). The authors draw on existing theories of entrustment decision-making to envision a structured way to decide on AI's role and level of engagement in HPE-related tasks, including proposing an AI-specific entrustment scale. Identifying tasks that AI could be entrusted with provides a focus around which considerations of trustworthiness and entrustment decision-making may be synthesized, making explicit the risks associated with AI use and identifying strategies to mitigate them. Responsible, trustworthy, and ethical use of AI requires health professions educators to develop safeguards for its use in teaching, learning, and practice; these guardrails can be operationalized by applying the entrustment concept to AI. Without such safeguards, HPE practice stands to be shaped by the oncoming wave of AI innovations tied to commercial motivations rather than modeled after HPE principles, which are rooted in the trust and transparency built together with trainees and patients.

Source journal
Academic Medicine (Medicine - Health Care)
CiteScore: 7.80
Self-citation rate: 9.50%
Articles published: 982
Review turnaround: 3-6 weeks
期刊介绍: Academic Medicine, the official peer-reviewed journal of the Association of American Medical Colleges, acts as an international forum for exchanging ideas, information, and strategies to address the significant challenges in academic medicine. The journal covers areas such as research, education, clinical care, community collaboration, and leadership, with a commitment to serving the public interest.