Attitudes Toward AI Usage in Patient Health Care: Evidence From a Population Survey Vignette Experiment.

IF 5.8 · JCR Q1, HEALTH CARE SCIENCES & SERVICES · CAS Region 2 (Medicine)
Simon Kühne, Jannes Jacobsen, Nicolas Legewie, Jörg Dollmann
Journal of Medical Internet Research, 2025;27:e70179. DOI: 10.2196/70179. Published 2025-05-27. Citations: 0.

Abstract

Background: The integration of artificial intelligence (AI) holds substantial potential to alter diagnostics and treatment in health care settings. However, public attitudes toward AI, including trust and risk perception, are key to its ethical and effective adoption. Despite growing interest, empirical research on the factors shaping public support for AI in health care (particularly in large-scale, representative contexts) remains limited.

Objective: This study aimed to investigate public attitudes toward AI in patient health care, focusing on how AI attributes (autonomy, costs, reliability, and transparency) shape perceptions of support, risk, and personalized care. In addition, it examines the moderating role of sociodemographic characteristics (gender, age, educational level, migration background, and subjective health status) in these evaluations. Our study offers novel insights into the relative importance of AI system characteristics for public attitudes and acceptance.

Methods: We conducted a factorial vignette experiment with a probability-based survey of 3030 participants from Germany's general population. Respondents were presented with hypothetical scenarios involving AI applications in diagnosis and treatment in a hospital setting. Linear regression models assessed the relative influence of AI attributes on the dependent variables (support, risk perception, and personalized care), with additional subgroup analyses to explore heterogeneity by sociodemographic characteristics.
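The modeling step described above can be sketched as follows. This is a minimal illustration on simulated vignette data, not the authors' dataset: the 0/1 attribute coding, the effect sizes, and the error scale are all invented for demonstration; only the sample size (3030) and the four attribute names come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3030  # sample size reported in the abstract

# Hypothetical dummy-coded vignette attributes (0/1); the paper's actual
# vignette levels and coding scheme are not specified here.
X = rng.integers(0, 2, size=(n, 4)).astype(float)
names = ["autonomy", "costs", "reliability", "transparency"]

# Invented effect sizes, for illustration only
beta = np.array([-0.2, -0.3, 1.0, 0.6])
support = 4.3 + X @ beta + rng.normal(0.0, 1.2, n)  # 1-7 support rating (unclipped sketch)

# Ordinary least squares: prepend an intercept column and solve
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, support, rcond=None)
for name, b in zip(["intercept"] + names, coef):
    print(f"{name}: {b:+.2f}")
```

With roughly 3000 observations the recovered coefficients sit close to the simulated effects, which is why a sample of this size can separate the relative influence of the four attributes.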

Results: Mean values between 4.2 and 4.4 on a 1-7 scale indicate a generally neutral to slightly negative stance toward AI integration in terms of general support, risk perception, and personalized care expectations, with responses spanning the full scale from strong support to strong opposition. Among the 4 attribute dimensions, reliability emerges as the most influential factor (explained variance [EV] of up to 10.5%). Respondents expect AI not only to prevent errors but also to exceed current reliability standards, while strongly disapproving of nontraceable systems; transparency is the second most important factor (EV of up to 4%). Costs and autonomy play comparatively minor roles (EV of up to 1.5% and 1.3%, respectively), with preferences favoring collaborative AI systems over autonomous ones and higher costs generally leading to rejection. Heterogeneity analyses reveal limited sociodemographic differences: education and migration background influence attitudes toward transparency and autonomy, while gender differences primarily affect cost-related perceptions. Overall, attitudes do not differ substantially between AI applications in diagnosis and treatment.
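The "explained variance" figures above can be read as incremental R² contributions: how much model fit is lost when one attribute is dropped from the regression. A hedged sketch of that computation on simulated data (the effect sizes are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3030
names = ["autonomy", "costs", "reliability", "transparency"]

# Simulated vignette data; effects are invented so reliability dominates,
# mirroring the ordering reported in the abstract.
X = rng.integers(0, 2, size=(n, 4)).astype(float)
y = 4.3 + X @ np.array([-0.2, -0.3, 1.0, 0.6]) + rng.normal(0.0, 1.2, n)

def r_squared(design, y):
    """R^2 of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), design])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

full = r_squared(X, y)
# Incremental contribution of each attribute: drop it, refit, take the R^2 loss
ev = {name: 100.0 * (full - r_squared(np.delete(X, i, axis=1), y))
      for i, name in enumerate(names)}
for name, share in ev.items():
    print(f"{name}: {share:.1f}% of variance")
```

Because each reduced model is nested in the full one, every increment is nonnegative, and the attribute with the largest coefficient (relative to its variance) claims the largest share.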

Conclusions: Our study fills a critical research gap by identifying the key factors that shape public trust and acceptance of AI in health care, particularly reliability, transparency, and patient-centered approaches. Our findings provide evidence-based recommendations for policy makers, health care providers, and AI developers to enhance trust and accountability, key concerns often overlooked in system development and real-world applications. The study highlights the need for targeted policy and educational initiatives to support the responsible integration of AI in patient care.

Source journal
CiteScore: 14.40
Self-citation rate: 5.40%
Articles per year: 654
Review time: 1 month
About the journal: The Journal of Medical Internet Research (JMIR) is a highly respected publication in the field of health informatics and health services. Founded in 1999, JMIR has been a pioneer in the field for over two decades. The journal focuses on digital health, data science, health informatics, and emerging technologies for health, medicine, and biomedical research. It is recognized as a top publication in these disciplines, ranking in the first quartile (Q1) by Impact Factor, and holds the #1 position on Google Scholar within the "Medical Informatics" discipline.