Anuj Gupta, Adil Basha, Tarun R Sontam, William J Hlavinka, Brett J Croen, Cherry Abdou, Mohammed Abdullah, Rita Hamilton
{"title":"复杂局部疼痛综合征患者教育材料的大语言人工智能模型演变:患者在学习吗?","authors":"Anuj Gupta, Adil Basha, Tarun R Sontam, William J Hlavinka, Brett J Croen, Cherry Abdou, Mohammed Abdullah, Rita Hamilton","doi":"10.1080/08998280.2025.2470033","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and Google using standardized scoring systems.</p><p><strong>Design: </strong>A Google search was conducted using the term \"complex regional pain syndrome,\" and the first 10 frequently asked questions (FAQs) and answers generated were recorded. ChatGPT was presented these FAQs generated by Google, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was then asked to generate its own set of 10 FAQs and answers.</p><p><strong>Results: </strong>ChatGPT's answers were significantly longer than Google's in response to both independently generated questions (330.0 ± 51.3 words, <i>P</i> < 0.0001) and Google-generated questions (289.7 ± 40.6 words, <i>P</i> < 0.0001). ChatGPT's answers to Google-generated questions were more difficult to read based on the Flesch-Kincaid Reading Ease Score (13.6 ± 10.8, <i>P</i> = 0.017).</p><p><strong>Conclusions: </strong>Our findings suggest that ChatGPT is a promising tool for patient education regarding complex regional pain syndrome based on its ability to generate a variety of question topics with responses from credible sources. That said, challenges such as readability and ethical considerations must be addressed prior to its widespread use for health information.</p>","PeriodicalId":8828,"journal":{"name":"Baylor University Medical Center Proceedings","volume":"38 3","pages":"221-226"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12057770/pdf/","citationCount":"0","resultStr":"{\"title\":\"Evolution of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?\",\"authors\":\"Anuj Gupta, Adil Basha, Tarun R Sontam, William J Hlavinka, Brett J Croen, Cherry Abdou, Mohammed Abdullah, Rita Hamilton\",\"doi\":\"10.1080/08998280.2025.2470033\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and Google using standardized scoring systems.</p><p><strong>Design: </strong>A Google search was conducted using the term \\\"complex regional pain syndrome,\\\" and the first 10 frequently asked questions (FAQs) and answers generated were recorded. ChatGPT was presented these FAQs generated by Google, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was then asked to generate its own set of 10 FAQs and answers.</p><p><strong>Results: </strong>ChatGPT's answers were significantly longer than Google's in response to both independently generated questions (330.0 ± 51.3 words, <i>P</i> < 0.0001) and Google-generated questions (289.7 ± 40.6 words, <i>P</i> < 0.0001). 
ChatGPT's answers to Google-generated questions were more difficult to read based on the Flesch-Kincaid Reading Ease Score (13.6 ± 10.8, <i>P</i> = 0.017).</p><p><strong>Conclusions: </strong>Our findings suggest that ChatGPT is a promising tool for patient education regarding complex regional pain syndrome based on its ability to generate a variety of question topics with responses from credible sources. That said, challenges such as readability and ethical considerations must be addressed prior to its widespread use for health information.</p>\",\"PeriodicalId\":8828,\"journal\":{\"name\":\"Baylor University Medical Center Proceedings\",\"volume\":\"38 3\",\"pages\":\"221-226\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-02-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12057770/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Baylor University Medical Center Proceedings\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/08998280.2025.2470033\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q3\",\"JCRName\":\"Medicine\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Baylor University Medical Center Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/08998280.2025.2470033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
Evolution of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?
Objectives: This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and Google using standardized scoring systems.
Design: A Google search was conducted using the term "complex regional pain syndrome," and the first 10 frequently asked questions (FAQs) and the answers generated were recorded. ChatGPT was presented with these Google-generated FAQs, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was then asked to generate its own set of 10 FAQs and answers.
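The abstract does not state how the Google-generated FAQs were submitted to ChatGPT (web interface versus API), which model version was used, or the exact question wording. A minimal sketch of the querying step, assuming the OpenAI Python client and placeholder questions, might look like this:

```python
# Minimal sketch only: the study does not specify how ChatGPT was queried;
# the client, model choice, and FAQ wording below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

google_faqs = [
    "What is complex regional pain syndrome?",      # placeholder wording
    "What causes complex regional pain syndrome?",  # placeholder wording
    # ...the remaining Google-generated FAQs would be listed here
]

chatgpt_answers = []
for question in google_faqs:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model version is an assumption, not stated in the abstract
        messages=[{"role": "user", "content": question}],
    )
    chatgpt_answers.append(completion.choices[0].message.content)
```

Each stored answer could then be scored for length and readability alongside the corresponding Google answer.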
Results: ChatGPT's answers were significantly longer than Google's in response to both independently generated questions (330.0 ± 51.3 words, P < 0.0001) and Google-generated questions (289.7 ± 40.6 words, P < 0.0001). ChatGPT's answers to Google-generated questions were more difficult to read based on the Flesch-Kincaid Reading Ease Score (13.6 ± 10.8, P = 0.017).
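The readability comparison rests on the Flesch Reading Ease formula, 206.835 - 1.015 x (words/sentences) - 84.6 x (syllables/words), where lower scores indicate harder text and values below roughly 30 correspond to college-graduate-level reading, so a mean of 13.6 is very difficult. A minimal sketch of scoring an answer, assuming the textstat package rather than whatever tool the authors actually used:

```python
# Minimal sketch: textstat is one common readability library, not necessarily
# the tool used in the study; the answer text is a placeholder, not a real response.
import textstat

answer = (
    "Complex regional pain syndrome is a chronic pain condition that usually "
    "affects a limb after an injury or surgery."
)

# Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words);
# lower scores mean harder reading.
reading_ease = textstat.flesch_reading_ease(answer)
word_count = textstat.lexicon_count(answer, removepunct=True)

print(f"Flesch Reading Ease: {reading_ease:.1f}, words: {word_count}")
```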
Conclusions: Our findings suggest that ChatGPT is a promising tool for patient education regarding complex regional pain syndrome based on its ability to generate a variety of question topics with responses from credible sources. That said, challenges such as readability and ethical considerations must be addressed prior to its widespread use for health information.