Implementing Generative AI to Enhance Patient Education on Retinopathy of Prematurity.

Impact Factor 1.0 · CAS Zone 4 (Medicine) · JCR Q4, Ophthalmology
Qais A Dihan, Andrew D Brown, Ana T Zaldivar, Kendall E Montgomery, Muhammad Z Chauhan, Seif E Abdelnaem, Arsalan A Ali, Sayena Jabbehdari, Amr Azzam, Ahmed B Sallam, Abdelrahman M Elhusseiny
*Journal of Pediatric Ophthalmology & Strabismus* · Published 2025-06-27 · pp. 1-10
DOI: 10.3928/01913913-20250515-01
Citations: 0

Abstract

Purpose: To evaluate the efficacy of large language models (LLMs) in generating patient education materials (PEMs) on retinopathy of prematurity (ROP).

Methods: ChatGPT-3.5 (OpenAI), ChatGPT-4 (OpenAI), and Gemini (Google AI) were compared across three separate prompts. Prompt A requested that each LLM generate a novel PEM on ROP. Prompt B requested PEMs written at a 6th-grade reading level, as measured by the validated Simple Measure of Gobbledygook (SMOG) readability formula. Prompt C requested that the LLMs improve the readability of existing, human-written PEMs to a 6th-grade reading level. PEMs inserted into Prompt C were sourced through a Google search of "retinopathy of prematurity." Each PEM was analyzed for readability (SMOG, Flesch-Kincaid Grade Level [FKGL]), quality (Patient Education Materials Assessment Tool [PEMAT], DISCERN), and accuracy (Likert Misinformation Scale).
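The two readability metrics used here are standard published formulas: SMOG grade = 1.0430 · √(polysyllables × 30 / sentences) + 3.1291, and FKGL = 0.39 · (words/sentences) + 11.8 · (syllables/words) − 15.59. A minimal sketch of how a PEM's scores could be computed follows; the syllable counter is a rough vowel-group heuristic (not the dictionary-based counting a validated tool would use), so treat the outputs as approximations:

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels (min 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _split(text: str):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return sentences, words

def smog(text: str) -> float:
    """SMOG grade: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
    sentences, words = _split(text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level: 0.39*(W/S) + 11.8*(Syl/W) - 15.59."""
    sentences, words = _split(text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

A text of short, monosyllabic sentences scores at the SMOG floor of about 3.1, illustrating why SMOG's polysyllable count and FKGL's syllable density can rank the same PEM differently, as in the baseline FKGL 8.8 vs. SMOG 8.6 reported below.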

Results: LLM-generated PEMs were of high quality (median DISCERN = 4), understandable (PEMAT-U ≥ 70%), and accurate (Likert = 1). Prompt B generated more readable PEMs than Prompt A (P < .001). ChatGPT-4 and Gemini rewrote PEMs (Prompt C) from a baseline readability level (FKGL: 8.8 ± 1.9, SMOG: 8.6 ± 1.5) to the targeted 6th-grade reading level. Only ChatGPT-4 rewrites maintained high quality and reliability (median DISCERN = 4).

Conclusions: LLMs, particularly ChatGPT-4, can serve as strong supplementary tools to automate the process of generating readable and high-quality PEMs for parents on ROP. [J Pediatr Ophthalmol Strabismus. 20XX;X(X):XXX-XXX.].

Source journal metrics: CiteScore 1.80 · Self-citation rate 8.30% · Articles per year 115 · Review time >12 weeks
Journal description: The Journal of Pediatric Ophthalmology & Strabismus is a bimonthly peer-reviewed publication for pediatric ophthalmologists. The Journal has published original articles on the diagnosis, treatment, and prevention of eye disorders in the pediatric age group and the treatment of strabismus in all age groups for over 50 years.