Eye · Impact Factor 2.8 · JCR Q1 (Ophthalmology) · CAS Tier 3 (Medicine)
Pub Date: 2024-12-16 · DOI: 10.1038/s41433-024-03476-5
Qais A Dihan, Andrew D Brown, Muhammad Z Chauhan, Ahmad F Alzein, Seif E Abdelnaem, Sean D Kelso, Dania A Rahal, Royce Park, Mohammadali Ashraf, Amr Azzam, Mahmoud Morsi, David B Warner, Ahmed B Sallam, Hajirah N Saeed, Abdelrahman M Elhusseiny
{"title":"Leveraging large language models to improve patient education on dry eye disease.","authors":"Qais A Dihan, Andrew D Brown, Muhammad Z Chauhan, Ahmad F Alzein, Seif E Abdelnaem, Sean D Kelso, Dania A Rahal, Royce Park, Mohammadali Ashraf, Amr Azzam, Mahmoud Morsi, David B Warner, Ahmed B Sallam, Hajirah N Saeed, Abdelrahman M Elhusseiny","doi":"10.1038/s41433-024-03476-5","DOIUrl":null,"url":null,"abstract":"<p><strong>Background/objectives: </strong>Dry eye disease (DED) is an exceedingly common diagnosis in patients, yet recent analyses have demonstrated patient education materials (PEMs) on DED to be of low quality and readability. Our study evaluated the utility and performance of three large language models (LLMs) in enhancing and generating new patient education materials (PEMs) on dry eye disease (DED).</p><p><strong>Subjects/methods: </strong>We evaluated PEMs generated by ChatGPT-3.5, ChatGPT-4, Gemini Advanced, using three separate prompts. Prompts A and B requested they generate PEMs on DED, with Prompt B specifying a 6th-grade reading level, using the SMOG (Simple Measure of Gobbledygook) readability formula. Prompt C asked for a rewrite of existing PEMs at a 6th-grade reading level. Each PEM was assessed on readability (SMOG, FKGL: Flesch-Kincaid Grade Level), quality (PEMAT: Patient Education Materials Assessment Tool, DISCERN), and accuracy (Likert Misinformation scale).</p><p><strong>Results: </strong>All LLM-generated PEMs in response to Prompt A and B were of high quality (median DISCERN = 4), understandable (PEMAT understandability ≥70%) and accurate (Likert Score=1). LLM-generated PEMs were not actionable (PEMAT Actionability <70%). ChatGPT-4 and Gemini Advanced rewrote existing PEMs (Prompt C) from a baseline readability level (FKGL: 8.0 ± 2.4, SMOG: 7.9 ± 1.7) to targeted 6th-grade reading level; rewrites contained little to no misinformation (median Likert misinformation=1 (range: 1-2)). However, only ChatGPT-4 rewrote PEMs while maintaining high quality and reliability (median DISCERN = 4).</p><p><strong>Conclusion: </strong>LLMs (notably ChatGPT-4) were able to generate and rewrite PEMs on DED that were readable, accurate, and high quality. Our study underscores the value of leveraging LLMs as supplementary tools to improving PEMs.</p>","PeriodicalId":12125,"journal":{"name":"Eye","volume":" ","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Eye","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1038/s41433-024-03476-5","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Leveraging large language models to improve patient education on dry eye disease.

Background/objectives: Dry eye disease (DED) is an exceedingly common diagnosis, yet recent analyses have shown that patient education materials (PEMs) on DED are of low quality and readability. Our study evaluated the utility and performance of three large language models (LLMs) in enhancing existing PEMs and generating new PEMs on DED.

Subjects/methods: We evaluated PEMs generated by ChatGPT-3.5, ChatGPT-4, and Gemini Advanced in response to three separate prompts. Prompts A and B asked each model to generate PEMs on DED, with Prompt B additionally specifying a 6th-grade reading level as measured by the SMOG (Simple Measure of Gobbledygook) readability formula. Prompt C asked each model to rewrite existing PEMs at a 6th-grade reading level. Each PEM was assessed for readability (SMOG and Flesch-Kincaid Grade Level, FKGL), quality (Patient Education Materials Assessment Tool, PEMAT, and DISCERN), and accuracy (Likert misinformation scale).
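
The readability targets in Prompts B and C (a 6th-grade reading level under SMOG, with FKGL also reported) are defined by published formulas, so a generated or rewritten PEM can be scored programmatically. The following is a minimal Python sketch, not the authors' code: the sentence and word splitting and the syllable counter are simple heuristics (a validated readability tool would count syllables from a dictionary), so the scores it returns are approximate.

import re
import math

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels as syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]

    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    n_syll = sum(syllables)
    polysyllables = sum(1 for s in syllables if s >= 3)

    # Flesch-Kincaid Grade Level
    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (n_syll / n_words) - 15.59
    # SMOG grade (the formula is calibrated for samples of about 30 sentences)
    smog = 1.0430 * math.sqrt(polysyllables * (30 / n_sent)) + 3.1291
    return {"FKGL": round(fkgl, 1), "SMOG": round(smog, 1)}

if __name__ == "__main__":
    sample = ("Dry eye disease happens when your eyes do not make enough tears. "
              "Your eyes may feel dry, itchy, or tired. Eye drops can help. "
              "See your eye doctor if it does not get better.")
    print(readability(sample))  # e.g. {'FKGL': ..., 'SMOG': ...}

In a workflow like the one described above, a score at or below 6 on either formula for the model's output would indicate the 6th-grade target was met.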

Results: All LLM-generated PEMs in response to Prompts A and B were of high quality (median DISCERN = 4), understandable (PEMAT understandability ≥70%), and accurate (Likert misinformation score = 1), but they were not actionable (PEMAT actionability <70%). ChatGPT-4 and Gemini Advanced rewrote existing PEMs (Prompt C) from a baseline readability level (FKGL: 8.0 ± 2.4, SMOG: 7.9 ± 1.7) to the targeted 6th-grade reading level, and the rewrites contained little to no misinformation (median Likert misinformation score = 1, range 1-2). However, only ChatGPT-4 rewrote PEMs while maintaining high quality and reliability (median DISCERN = 4).

Conclusion: LLMs (notably ChatGPT-4) were able to generate and rewrite PEMs on DED that were readable, accurate, and of high quality. Our study underscores the value of leveraging LLMs as supplementary tools for improving PEMs.

Source journal: Eye (Medicine - Ophthalmology)
CiteScore: 6.40
Self-citation rate: 5.10%
Publication volume: 481
Review time: 3-6 weeks
Journal description: Eye seeks to provide the international practising ophthalmologist with high quality articles, of academic rigour, on the latest global clinical and laboratory based research. Its core aim is to advance the science and practice of ophthalmology with the latest clinical- and scientific-based research. Whilst principally aimed at the practising clinician, the journal contains material of interest to a wider readership including optometrists, orthoptists, other health care professionals and research workers in all aspects of the field of visual science worldwide. Eye is the official journal of The Royal College of Ophthalmologists. Eye encourages the submission of original articles covering all aspects of ophthalmology including: external eye disease; oculo-plastic surgery; orbital and lacrimal disease; ocular surface and corneal disorders; paediatric ophthalmology and strabismus; glaucoma; medical and surgical retina; neuro-ophthalmology; cataract and refractive surgery; ocular oncology; ophthalmic pathology; ophthalmic genetics.