Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Garrett Foresman, Joshua Biro, Alberta Tran, Kate MacRae, Sadaf Kazi, Laura Schubel, Adam Visconti, William Gallagher, Kelly M Smith, Traber Giardina, Helen Haskell, Kristen Miller
{"title":"Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.","authors":"Garrett Foresman, Joshua Biro, Alberta Tran, Kate MacRae, Sadaf Kazi, Laura Schubel, Adam Visconti, William Gallagher, Kelly M Smith, Traber Giardina, Helen Haskell, Kristen Miller","doi":"10.2196/69564","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially in its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication.</p><p><strong>Objective: </strong>This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools.</p><p><strong>Methods: </strong>This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus.</p><p><strong>Results: </strong>Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). Participants valued the co-design session and felt they had a significant say in the discussions.</p><p><strong>Conclusions: </strong>This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. 
These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.</p>","PeriodicalId":36208,"journal":{"name":"Journal of Participatory Medicine","volume":"17 ","pages":"e69564"},"PeriodicalIF":0.0000,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Participatory Medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/69564","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Medicine","Score":null,"Total":0}

Abstract

Background: Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research has explored patient perspectives on AI, especially its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication.

Objective: This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools.

Methods: This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus.

Results: Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). Participants valued the co-design session and felt they had a significant say in the discussions.

Conclusions: This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.

Source journal
Journal of Participatory Medicine (Medicine, miscellaneous; JCR Q2)
CiteScore: 3.20
Self-citation rate: 0.00%
Articles published: 8
Review time: 12 weeks