Bridging Artificial Intelligence and Medical Education: Navigating the Alignment Paradox.

ATS Scholar · IF 1.7 · Q3 (Critical Care Medicine)
Laurah Turner, Michelle I Knopp, Eneida A Mendonca, Sanjay Desai
{"title":"桥接人工智能和医学教育:导航对齐悖论。","authors":"Laurah Turner, Michelle I Knopp, Eneida A Mendonca, Sanjay Desai","doi":"10.34197/ats-scholar.2024-0086PS","DOIUrl":null,"url":null,"abstract":"<p><p>The integration of artificial intelligence (AI) into medical education presents both unprecedented opportunities and significant challenges, epitomized by the \"alignment paradox.\" This paradox asks: How do we ensure AI systems remain aligned with our educational goals? For instance, AI could create highly personalized learning pathways, but this might conflict with educators' intentions for structured skill development. This paper proposes a framework to address this paradox, focusing on four key principles: ethics, robustness, interpretability, and scalable oversight. We examine the current landscape of AI in medical education, highlighting its potential to enhance learning experiences, improve clinical decision making, and personalize education. We review ethical considerations, emphasize the importance of robustness across diverse healthcare settings, and present interpretability as crucial for effective human-AI collaboration. For example, AI-based feedback systems like i-SIDRA enable real-time, actionable feedback, enhancing interpretability while reducing cognitive overload. The concept of scalable oversight is introduced to maintain human control while leveraging AI's autonomy. We outline strategies for implementing this oversight, including directable behaviors and human-AI collaboration techniques. With this road map, we aim to support the medical education community in responsibly harnessing AI's power in its educational systems.</p>","PeriodicalId":72330,"journal":{"name":"ATS scholar","volume":" ","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Bridging Artificial Intelligence and Medical Education: Navigating the Alignment Paradox.\",\"authors\":\"Laurah Turner, Michelle I Knopp, Eneida A Mendonca, Sanjay Desai\",\"doi\":\"10.34197/ats-scholar.2024-0086PS\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The integration of artificial intelligence (AI) into medical education presents both unprecedented opportunities and significant challenges, epitomized by the \\\"alignment paradox.\\\" This paradox asks: How do we ensure AI systems remain aligned with our educational goals? For instance, AI could create highly personalized learning pathways, but this might conflict with educators' intentions for structured skill development. This paper proposes a framework to address this paradox, focusing on four key principles: ethics, robustness, interpretability, and scalable oversight. We examine the current landscape of AI in medical education, highlighting its potential to enhance learning experiences, improve clinical decision making, and personalize education. We review ethical considerations, emphasize the importance of robustness across diverse healthcare settings, and present interpretability as crucial for effective human-AI collaboration. For example, AI-based feedback systems like i-SIDRA enable real-time, actionable feedback, enhancing interpretability while reducing cognitive overload. The concept of scalable oversight is introduced to maintain human control while leveraging AI's autonomy. We outline strategies for implementing this oversight, including directable behaviors and human-AI collaboration techniques. 
With this road map, we aim to support the medical education community in responsibly harnessing AI's power in its educational systems.</p>\",\"PeriodicalId\":72330,\"journal\":{\"name\":\"ATS scholar\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ATS scholar\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.34197/ats-scholar.2024-0086PS\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"CRITICAL CARE MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ATS scholar","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34197/ats-scholar.2024-0086PS","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"CRITICAL CARE MEDICINE","Score":null,"Total":0}
Citations: 0

Abstract

The integration of artificial intelligence (AI) into medical education presents both unprecedented opportunities and significant challenges, epitomized by the "alignment paradox." This paradox asks: How do we ensure AI systems remain aligned with our educational goals? For instance, AI could create highly personalized learning pathways, but this might conflict with educators' intentions for structured skill development. This paper proposes a framework to address this paradox, focusing on four key principles: ethics, robustness, interpretability, and scalable oversight. We examine the current landscape of AI in medical education, highlighting its potential to enhance learning experiences, improve clinical decision making, and personalize education. We review ethical considerations, emphasize the importance of robustness across diverse healthcare settings, and present interpretability as crucial for effective human-AI collaboration. For example, AI-based feedback systems like i-SIDRA enable real-time, actionable feedback, enhancing interpretability while reducing cognitive overload. The concept of scalable oversight is introduced to maintain human control while leveraging AI's autonomy. We outline strategies for implementing this oversight, including directable behaviors and human-AI collaboration techniques. With this road map, we aim to support the medical education community in responsibly harnessing AI's power in its educational systems.
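
To make the abstract's notion of "scalable oversight" with "directable behaviors" more concrete, below is a minimal, illustrative Python sketch of a human-in-the-loop gate: AI-generated learner feedback is delivered autonomously only when it is low stakes and high confidence, and is otherwise escalated to an educator. The paper does not specify an implementation; all names, thresholds, and fields here (Recommendation, scalable_oversight, confidence_floor) are hypothetical.

```python
# Illustrative sketch only: a "scalable oversight" gate for AI-generated feedback.
# High-stakes or low-confidence recommendations are escalated to a human educator,
# so the human stays in control while routine feedback flows autonomously.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Recommendation:
    learner_id: str
    message: str        # e.g., formative feedback on a clinical reasoning exercise
    confidence: float   # model's self-reported confidence, 0.0-1.0
    high_stakes: bool   # e.g., feedback tied to summative assessment


def scalable_oversight(rec: Recommendation,
                       educator_review: Callable[[Recommendation], bool],
                       confidence_floor: float = 0.85) -> str:
    """Deliver low-risk, high-confidence feedback autonomously;
    escalate everything else to a human educator (a directable behavior)."""
    if rec.high_stakes or rec.confidence < confidence_floor:
        approved = educator_review(rec)  # human decision point
        return "delivered (human-approved)" if approved else "withheld"
    return "delivered (autonomous)"


if __name__ == "__main__":
    rec = Recommendation("learner-42", "Revisit ventilator weaning criteria.", 0.72, False)
    # Stand-in reviewer that approves everything; in practice this would be an educator UI.
    print(scalable_oversight(rec, educator_review=lambda r: True))
```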
