Evolution of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?

Q3 Medicine
Baylor University Medical Center Proceedings · Pub Date: 2025-02-28 · eCollection Date: 2025-01-01 · DOI: 10.1080/08998280.2025.2470033
Anuj Gupta, Adil Basha, Tarun R Sontam, William J Hlavinka, Brett J Croen, Cherry Abdou, Mohammed Abdullah, Rita Hamilton
{"title":"Evolution of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?","authors":"Anuj Gupta, Adil Basha, Tarun R Sontam, William J Hlavinka, Brett J Croen, Cherry Abdou, Mohammed Abdullah, Rita Hamilton","doi":"10.1080/08998280.2025.2470033","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and Google using standardized scoring systems.</p><p><strong>Design: </strong>A Google search was conducted using the term \"complex regional pain syndrome,\" and the first 10 frequently asked questions (FAQs) and answers generated were recorded. ChatGPT was presented these FAQs generated by Google, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was then asked to generate its own set of 10 FAQs and answers.</p><p><strong>Results: </strong>ChatGPT's answers were significantly longer than Google's in response to both independently generated questions (330.0 ± 51.3 words, <i>P</i> < 0.0001) and Google-generated questions (289.7 ± 40.6 words, <i>P</i> < 0.0001). ChatGPT's answers to Google-generated questions were more difficult to read based on the Flesch-Kincaid Reading Ease Score (13.6 ± 10.8, <i>P</i> = 0.017).</p><p><strong>Conclusions: </strong>Our findings suggest that ChatGPT is a promising tool for patient education regarding complex regional pain syndrome based on its ability to generate a variety of question topics with responses from credible sources. That said, challenges such as readability and ethical considerations must be addressed prior to its widespread use for health information.</p>","PeriodicalId":8828,"journal":{"name":"Baylor University Medical Center Proceedings","volume":"38 3","pages":"221-226"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12057770/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Baylor University Medical Center Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/08998280.2025.2470033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
引用次数: 0

Abstract

Objectives: This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and Google using standardized scoring systems.

Design: A Google search was conducted using the term "complex regional pain syndrome," and the first 10 frequently asked questions (FAQs) and answers generated were recorded. ChatGPT was presented these FAQs generated by Google, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was then asked to generate its own set of 10 FAQs and answers.
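
As a concrete illustration of the readability evaluation described above, the sketch below (not the authors' code) shows how an answer's word count and Flesch Reading Ease might be computed. The syllable counter is a crude vowel-run heuristic; published readability studies typically rely on validated scoring tools, so treat this only as an approximation of the metric.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count runs of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_metrics(text: str) -> dict:
    """Word count and Flesch Reading Ease (higher score = easier to read)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    fre = (206.835
           - 1.015 * (len(words) / max(1, len(sentences)))
           - 84.6 * (syllables / max(1, len(words))))
    return {"word_count": len(words), "flesch_reading_ease": round(fre, 1)}

sample = ("Complex regional pain syndrome is a chronic pain condition. "
          "It usually affects an arm or a leg after an injury.")
print(readability_metrics(sample))
```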

Results: ChatGPT's answers were significantly longer than Google's in response to both independently generated questions (330.0 ± 51.3 words, P < 0.0001) and Google-generated questions (289.7 ± 40.6 words, P < 0.0001). ChatGPT's answers to Google-generated questions were more difficult to read based on the Flesch-Kincaid Reading Ease Score (13.6 ± 10.8, P = 0.017).
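
The abstract reports group means ± standard deviations with P values but does not state which statistical test produced them; an unpaired Welch's t-test is one plausible choice for comparing answer lengths between the two sources. The sketch below uses invented placeholder word counts, not the study's data, purely to show the shape of such a comparison.

```python
from scipy import stats

# Hypothetical per-question word counts (placeholders, not study data).
chatgpt_word_counts = [310, 295, 342, 288, 301, 276, 330, 299, 315, 284]
google_word_counts = [120, 98, 135, 110, 102, 95, 128, 105, 117, 99]

# Welch's t-test (unequal variances) comparing mean answer length.
t_stat, p_value = stats.ttest_ind(chatgpt_word_counts, google_word_counts,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.4g}")
```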

Conclusions: Our findings suggest that ChatGPT is a promising tool for patient education regarding complex regional pain syndrome based on its ability to generate a variety of question topics with responses from credible sources. That said, challenges such as readability and ethical considerations must be addressed prior to its widespread use for health information.
