Health Consumers' Use and Perceptions of Health Information from Generative Artificial Intelligence Chatbots: A Scoping Review.

IF 2.2 · Zone 2 (Medicine) · Q4 MEDICAL INFORMATICS
Applied Clinical Informatics · Pub Date: 2025-08-01 · Epub Date: 2025-07-02 · DOI: 10.1055/a-2647-1210
John Robert Bautista, Drew Herbert, Matthew Farmer, Ryan Q De Torres, Gil P Soriano, Charlene E Ronquillo
{"title":"Health Consumers' Use and Perceptions of Health Information from Generative Artificial Intelligence Chatbots: A Scoping Review.","authors":"John Robert Bautista, Drew Herbert, Matthew Farmer, Ryan Q De Torres, Gil P Soriano, Charlene E Ronquillo","doi":"10.1055/a-2647-1210","DOIUrl":null,"url":null,"abstract":"<p><p>Health consumers can use generative artificial intelligence (GenAI) chatbots to seek health information. As GenAI chatbots continue to improve and be adopted, it is crucial to examine how health information generated by such tools is used and perceived by health consumers.To conduct a scoping review of health consumers' use and perceptions of health information from GenAI chatbots.Arksey and O'Malley's five-step protocol was used to guide the scoping review. Following PRISMA guidelines, relevant empirical papers published on or after January 1, 2019, were retrieved between February and July 2024. Thematic and content analyses were performed.We retrieved 3,840 titles and reviewed 12 papers that included 13 studies (quantitative = 5, qualitative = 4, and mixed = 4). ChatGPT was used in 11 studies, while two studies used GPT-3. Most were conducted in the United States (<i>n</i> = 4). The studies involve general and specific (e.g., medical imaging, psychological health, and vaccination) health topics. One study explicitly used a theory. Eight studies were rated with excellent quality. Studies were categorized as user experience studies (<i>n</i> = 4), consumer surveys (<i>n</i> = 1), and evaluation studies (<i>n</i> = 8). Five studies examined health consumers' use of health information from GenAI chatbots. Perceptions focused on: (1) accuracy, reliability, or quality; (2) readability; (3) trust or trustworthiness; (4) privacy, confidentiality, security, or safety; (5) usefulness; (6) accessibility; (7) emotional appeal; (8) attitude; and (9) effectiveness.Although health consumers can use GenAI chatbots to obtain accessible, readable, and useful health information, negative perceptions of their accuracy, trustworthiness, effectiveness, and safety serve as barriers that must be addressed to mitigate health-related risks, improve health beliefs, and achieve positive health outcomes. More theory-based studies are needed to better understand how exposure to health information from GenAI chatbots affects health beliefs and outcomes.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":"892-902"},"PeriodicalIF":2.2000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12390362/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Clinical Informatics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1055/a-2647-1210","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/7/2 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
Citations: 0

Abstract

Background: Health consumers can use generative artificial intelligence (GenAI) chatbots to seek health information. As GenAI chatbots continue to improve and gain adoption, it is crucial to examine how the health information they generate is used and perceived by health consumers.

Objective: To conduct a scoping review of health consumers' use and perceptions of health information from GenAI chatbots.

Methods: Arksey and O'Malley's five-step protocol guided the scoping review. Following PRISMA guidelines, relevant empirical papers published on or after January 1, 2019, were retrieved between February and July 2024. Thematic and content analyses were performed.

Results: We retrieved 3,840 titles and reviewed 12 papers comprising 13 studies (quantitative = 5, qualitative = 4, mixed methods = 4). ChatGPT was used in 11 studies, while two studies used GPT-3. Most were conducted in the United States (n = 4). The studies covered general and specific (e.g., medical imaging, psychological health, and vaccination) health topics. One study explicitly used a theory. Eight studies were rated as excellent in quality. Studies were categorized as user experience studies (n = 4), consumer surveys (n = 1), and evaluation studies (n = 8). Five studies examined health consumers' use of health information from GenAI chatbots. Perceptions focused on: (1) accuracy, reliability, or quality; (2) readability; (3) trust or trustworthiness; (4) privacy, confidentiality, security, or safety; (5) usefulness; (6) accessibility; (7) emotional appeal; (8) attitude; and (9) effectiveness.

Conclusion: Although health consumers can use GenAI chatbots to obtain accessible, readable, and useful health information, negative perceptions of their accuracy, trustworthiness, effectiveness, and safety are barriers that must be addressed to mitigate health-related risks, improve health beliefs, and achieve positive health outcomes. More theory-based studies are needed to better understand how exposure to health information from GenAI chatbots affects health beliefs and outcomes.

Source Journal

Applied Clinical Informatics (MEDICAL INFORMATICS)
CiteScore: 4.60
Self-citation rate: 24.10%
Articles per year: 132

Journal description: ACI is the third Schattauer journal dealing with biomedical and health informatics. It complements our other journals, Methods of Information in Medicine and the Yearbook of Medical Informatics. With the Yearbook of Medical Informatics serving as the "milestone" or state-of-the-art journal and Methods of Information in Medicine as the "science and research" journal of IMIA, ACI intends to be the "practical" journal of IMIA.