Public Perceptions of AI in Medicine and Implications for Future Medical Education: Cross-Sectional Survey.

Impact Factor: 2.0 · JCR Q3 (Health Care Sciences & Services)
Michael Constantin Kirchberger
{"title":"Public Perceptions of AI in Medicine and Implications for Future Medical Education: Cross-Sectional Survey.","authors":"Michael Constantin Kirchberger","doi":"10.2196/89123","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The integration of artificial intelligence (AI) into clinical practice is contingent on public trust. This trust often depends on physician oversight, yet a significant gap exists between the need for AI-competent physicians and the current state of medical education. While the perspectives of students and experts on this gap are known, the views of the US general public remain largely unquantified.</p><p><strong>Objective: </strong>This study aimed to assess US public perceptions regarding AI in medicine and the corresponding emergent needs for medical education. We specifically sought to quantify public trust in different diagnostic scenarios, concerns about physician overreliance on AI, support for mandatory AI education, and priorities for the future focus of medical training.</p><p><strong>Methods: </strong>We conducted a cross-sectional, web-based survey of adults in the United States in November 2025. Participants (N=524) were recruited via SurveyMonkey Audience. We calculated descriptive statistics, frequencies, proportions (percentages), and 95% CIs for all main survey items.</p><p><strong>Results: </strong>A total of 524 participants completed the survey. Most (n=329, 62.8%; 95% CI 58.6%-66.9%) placed the most trust in a physician's diagnosis based on their expertise alone; only 7.8% (n=41; 95% CI 5.5%-10.1%) trusted an AI-first diagnostic model. Trust was highly contingent on training: 93.9% (n=492) of participants rated formal physician training on AI limitations as \"essential\" or \"very important.\" Widespread concern about physician overreliance on AI was reported, with 81.1% (n=425) being \"very concerned\" or \"extremely concerned.\" Consequently, 85.1% (n=446) agreed or strongly agreed that training on AI use, ethics, and limitations should be mandatory in medical school. When asked about future educational priorities, 70.2% (n=368; 95% CI 66.3%-74.1%) believed that medical education should focus on human-centered skills (eg, empathy and communication) over clinical skills.</p><p><strong>Conclusions: </strong>The US public expressed conditional trust in medical AI, strongly preferring physician-led and critically supervised models. These findings reveal a clear public mandate for medical education reform. The public expects future physicians to be mandatorily trained to appraise AI, understand its limitations, and refocus their professional development on the human-centered skills that technology cannot replace.</p>","PeriodicalId":14841,"journal":{"name":"JMIR Formative Research","volume":"10 ","pages":"e89123"},"PeriodicalIF":2.0000,"publicationDate":"2026-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13082342/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Formative Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/89123","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Background: The integration of artificial intelligence (AI) into clinical practice is contingent on public trust. This trust often depends on physician oversight, yet a significant gap exists between the need for AI-competent physicians and the current state of medical education. While the perspectives of students and experts on this gap are known, the views of the US general public remain largely unquantified.

Objective: This study aimed to assess US public perceptions regarding AI in medicine and the corresponding emergent needs for medical education. We specifically sought to quantify public trust in different diagnostic scenarios, concerns about physician overreliance on AI, support for mandatory AI education, and priorities for the future focus of medical training.

Methods: We conducted a cross-sectional, web-based survey of adults in the United States in November 2025. Participants (N=524) were recruited via SurveyMonkey Audience. We calculated descriptive statistics, frequencies, proportions (percentages), and 95% CIs for all main survey items.
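
For reference, the abstract does not state which confidence-interval method was used. The short Python sketch below, assuming a normal-approximation (Wald) interval, reproduces the reported proportion and 95% CI for the headline item (329 of 524 participants); the function name proportion_ci is illustrative, not from the paper.

# Illustrative sketch (not from the paper): a 95% Wald confidence interval
# for a survey proportion. The abstract does not specify the CI method, but
# this normal approximation reproduces the reported 58.6%-66.9% for 329/524.
import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Return (proportion, lower bound, upper bound) of a Wald 95% CI."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

p, lower, upper = proportion_ci(329, 524)  # trust in physician expertise alone
print(f"{p:.1%} (95% CI {lower:.1%}-{upper:.1%})")  # 62.8% (95% CI 58.6%-66.9%)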

Results: A total of 524 participants completed the survey. Most participants (n=329, 62.8%; 95% CI 58.6%-66.9%) placed the greatest trust in a physician's diagnosis based on their expertise alone; only 7.8% (n=41; 95% CI 5.5%-10.1%) trusted an AI-first diagnostic model. Trust was highly contingent on training: 93.9% (n=492) of participants rated formal physician training on AI limitations as "essential" or "very important." Concern about physician overreliance on AI was widespread, with 81.1% (n=425) reporting being "very concerned" or "extremely concerned." Consequently, 85.1% (n=446) agreed or strongly agreed that training on AI use, ethics, and limitations should be mandatory in medical school. When asked about future educational priorities, 70.2% (n=368; 95% CI 66.3%-74.1%) believed that medical education should focus on human-centered skills (eg, empathy and communication) over clinical skills.

Conclusions: The US public expressed conditional trust in medical AI, strongly preferring physician-led, critically supervised models. These findings reveal a clear public mandate for medical education reform: the public expects future physicians to receive mandatory training in appraising AI and understanding its limitations, and to refocus their professional development on the human-centered skills that technology cannot replace.

Source journal
JMIR Formative Research (Medicine, miscellaneous)
CiteScore: 2.70
Self-citation rate: 9.10%
Articles published: 579
Review time: 12 weeks