Comparing the Accuracy and Readability of Glaucoma-related Question Responses and Educational Materials by Google and ChatGPT

Journal of Current Glaucoma Practice · JCR: Q3 (Medicine) · Pub Date: 2024-07-01 · Epub Date: 2024-10-29 · DOI: 10.5005/jp-journals-10078-1448
Samuel A Cohen, Ann C Fisher, Benjamin Y Xu, Brian J Song
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576343/pdf/
Citations: 0

Abstract


Aim and background: Patients are increasingly turning to the internet to learn more about their ocular disease. In this study, we sought (1) to compare the accuracy and readability of Google and ChatGPT responses to patients' glaucoma-related frequently asked questions (FAQs) and (2) to evaluate ChatGPT's capacity to improve glaucoma patient education materials by accurately reducing the grade level at which they are written.

Materials and methods: We performed a Google search to identify the three most common FAQs for each of 10 search terms associated with glaucoma diagnosis and treatment. Each of the 30 FAQs was entered into both Google and ChatGPT, and responses were recorded. The accuracy of responses was evaluated by three glaucoma specialists, while readability was assessed using five validated readability indices. Subsequently, ChatGPT was instructed to generate patient education materials at specific reading levels to explain seven glaucoma procedures. The accuracy and readability of these procedural explanations were then measured.
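The abstract does not name the five readability indices used, but grade-level scores like those reported below are typically computed with formulas such as Flesch-Kincaid. As an illustration only (the naive vowel-group syllable counter is an assumption, so scores are approximate), a minimal sketch:

```python
import re

def flesch_kincaid_grade(text):
    """Estimate U.S. grade level with the Flesch-Kincaid formula:
    FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59
    Syllables are approximated by counting vowel groups per word.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word):
        # Naive approximation: each run of vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)
```

On this scale, short common-word sentences score near the early grades, while long sentences built from polysyllabic clinical vocabulary score at college level or above, which is the kind of gap the study quantifies.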

Results: ChatGPT responses to glaucoma FAQs were significantly more accurate than Google responses (97% vs 77% accuracy, respectively; p < 0.001). ChatGPT responses were also written at a significantly higher reading level (grade 14.3 vs 9.4, respectively; p < 0.001). When instructed to revise glaucoma procedural explanations to improve understandability, ChatGPT reduced the average reading level of educational materials from grade 16.6 (college level) to grade 9.4 (high school level) (p < 0.001) without reducing the accuracy of the procedural explanations.
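The abstract does not state which statistical test produced the accuracy comparison, nor the exact rating counts behind the 97% and 77% figures. Purely as a sketch of how such a proportion comparison works (the counts 29/30 and 23/30 below are illustrative assumptions, not the paper's data), a pooled two-proportion z-test:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error.
    Returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 29/30 FAQs rated accurate (~97%) vs 23/30 (~77%).
z, p = two_proportion_z_test(29, 30, 23, 30)
```

With larger effective sample sizes (e.g., pooling ratings from all three specialists), the same difference in proportions yields a much smaller p-value, consistent with the p < 0.001 the study reports.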

Conclusion: ChatGPT is more accurate than Google search when responding to glaucoma patient FAQs. ChatGPT successfully reduced the reading level of glaucoma procedural explanations without sacrificing accuracy, with implications for the future of customized patient education for patients with varying health literacy.

Clinical significance: Our study demonstrates the utility of ChatGPT for patients seeking information about glaucoma and for physicians when creating unique patient education materials at reading levels that optimize understanding by patients. An enhanced patient understanding of glaucoma may lead to informed decision-making and improve treatment compliance.

How to cite this article: Cohen SA, Fisher AC, Xu BY, et al. Comparing the Accuracy and Readability of Glaucoma-related Question Responses and Educational Materials by Google and ChatGPT. J Curr Glaucoma Pract 2024;18(3):110-116.

Source journal: Journal of Current Glaucoma Practice (Medicine – Ophthalmology) · CiteScore: 1.00 · Self-citation rate: 0.00% · Articles per year: 38