Utilizing AI-Generated Plain Language Summaries to Enhance Interdisciplinary Understanding of Ophthalmology Notes: A Randomized Trial

Prashant D. Tailor, Haley S. D'Souza, Clara Castillejo Becerra, Heidi M. Dahl, Neil R. Patel, Tyler M. Kaplan, Darrell Kohli, Erick D. Bothun, Brian G. Mohney, Andrea A. Tooley, Keith H. Baratz, Raymond Iezzi, Andrew J. Barkmeier, Sophie J. Bakri, Gavin W. Roddy, David Hodge, Arthur J. Sit, Matthew R. Starr, John J. Chen
{"title":"Utilizing AI-Generated Plain Language Summaries to Enhance Interdisciplinary Understanding of Ophthalmology Notes: A Randomized Trial","authors":"Prashant D. Tailor, Haley S. D'Souza, Clara Castillejo Becerra, Heidi M. Dahl, Neil R. Patel, Tyler M. Kaplan, Darrell Kohli, Erick D. Bothun, Brian G. Mohney, Andrea A. Tooley, Keith H. Baratz, Raymond Iezzi, Andrew J. Barkmeier, Sophie J. Bakri, Gavin W. Roddy, David Hodge, Arthur J. Sit, Matthew R. Starr, John J. Chen","doi":"10.1101/2024.09.12.24313551","DOIUrl":null,"url":null,"abstract":"Background Specialized terminology employed by ophthalmologists creates a comprehension barrier for non-ophthalmology providers, compromising interdisciplinary communication and patient care. Current solutions such as manual note simplification are impractical or inadequate. Large language models (LLMs) present a potential low-burden approach to translating ophthalmology documentation into accessible language. Methods This prospective, randomized trial evaluated the addition of LLM-generated plain language summaries (PLSs) to standard ophthalmology notes (SONs). Participants included non-ophthalmology providers and ophthalmologists. The study assessed: (1) non-ophthalmology providers' comprehension and satisfaction with either the SON (control) or SON+PLS (intervention), (2) ophthalmologists' evaluation of PLS accuracy, safety, and time burden, and (3) objective semantic and linguistic quality of PLSs. Results 85% of non-ophthalmology providers (n=362, 33% response rate) preferred the PLS to SON. Non-ophthalmology providers reported enhanced diagnostic understanding (p=0.012), increased note detail satisfaction (p<0.001), and improved explanation clarity (p<0.001) for notes containing a PLS. The addition of a PLS narrowed comprehension gaps between providers who were comfortable and uncomfortable with ophthalmology terminology at baseline (intergroup difference p<0.001 to p>0.05). PLS semantic analysis demonstrated high meaning preservation (BERTScore mean F1 score: 0.85) with greater readability (Flesch Reading Ease: 51.8 vs. 43.6, Flesch-Kincaid Grade Level: 10.7 vs. 11.9). Ophthalmologists (n=489, 84% response rate) reported high PLS accuracy (90% \"a great deal\") with minimal review time burden (94.9% ≤ 1 minute). PLS error rate on initial ophthalmologist review and editing was 26%, and 15% on independent ophthalmologist over-read of edited PLSs. 84.9% of identified errors were deemed low risk for patient harm and 0% had a risk of severe harm/death. Conclusions LLM-generated plain language summaries enhance accessibility and utility of ophthalmology notes for non-ophthalmology providers while maintaining high semantic fidelity and improving readability. PLS error rates underscore the need for careful implementation and ongoing safety monitoring in clinical practice.","PeriodicalId":501390,"journal":{"name":"medRxiv - Ophthalmology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Ophthalmology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.09.12.24313551","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Specialized terminology employed by ophthalmologists creates a comprehension barrier for non-ophthalmology providers, compromising interdisciplinary communication and patient care. Current solutions such as manual note simplification are impractical or inadequate. Large language models (LLMs) offer a potential low-burden approach to translating ophthalmology documentation into accessible language.

Methods: This prospective, randomized trial evaluated the addition of LLM-generated plain language summaries (PLSs) to standard ophthalmology notes (SONs). Participants included non-ophthalmology providers and ophthalmologists. The study assessed: (1) non-ophthalmology providers' comprehension of and satisfaction with either the SON alone (control) or the SON plus a PLS (intervention); (2) ophthalmologists' evaluation of PLS accuracy, safety, and review-time burden; and (3) the objective semantic and linguistic quality of the PLSs.

Results: 85% of non-ophthalmology providers (n=362, 33% response rate) preferred the PLS to the SON alone. Non-ophthalmology providers reported enhanced diagnostic understanding (p=0.012), greater satisfaction with note detail (p<0.001), and improved explanation clarity (p<0.001) for notes containing a PLS. Adding a PLS narrowed the comprehension gap between providers who were and were not comfortable with ophthalmology terminology at baseline (intergroup difference attenuated from p<0.001 to p>0.05). Semantic analysis showed that PLSs preserved meaning well (mean BERTScore F1: 0.85) while improving readability (Flesch Reading Ease: 51.8 vs. 43.6; Flesch-Kincaid Grade Level: 10.7 vs. 11.9). Ophthalmologists (n=489, 84% response rate) rated PLS accuracy highly (90% "a great deal") with minimal review-time burden (94.9% required ≤1 minute). The PLS error rate was 26% on initial ophthalmologist review and editing, and 15% on independent ophthalmologist over-read of the edited PLSs. Of the identified errors, 84.9% were deemed low risk for patient harm and none carried a risk of severe harm or death.

Conclusions: LLM-generated plain language summaries enhance the accessibility and utility of ophthalmology notes for non-ophthalmology providers while maintaining high semantic fidelity and improving readability. Residual PLS error rates underscore the need for careful implementation and ongoing safety monitoring in clinical practice.
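The semantic and readability metrics cited above are standard, reproducible measures. Below is a minimal sketch of how a PLS could be scored against its source note using the open-source bert-score and textstat Python packages; the preprint's abstract does not specify the exact tooling, model checkpoints, or prompts used, and the note/summary texts here are invented for illustration only.

```python
# Minimal sketch: scoring a plain language summary (PLS) against its
# source ophthalmology note. Assumes `pip install bert-score textstat`.
# The example texts below are hypothetical, not from the study.
from bert_score import score
import textstat

# Hypothetical standard ophthalmology note (SON) and PLS pair.
standard_note = (
    "OD: CSME with hard exudates encroaching on the fovea; "
    "PRP deferred, plan focal laser and anti-VEGF."
)
plain_summary = (
    "Right eye: swelling in the central retina is threatening the "
    "sharpest point of vision. Plan: targeted laser plus injections."
)

# Semantic fidelity: BERTScore compares contextual embeddings of the
# candidate (PLS) against the reference (SON); an F1 near 1.0 means
# the summary preserves the note's meaning.
P, R, F1 = score([plain_summary], [standard_note], lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.2f}")

# Readability: higher Flesch Reading Ease means easier text; a lower
# Flesch-Kincaid Grade Level means fewer years of schooling needed.
for label, text in [("SON", standard_note), ("PLS", plain_summary)]:
    print(
        f"{label}: Flesch Reading Ease = {textstat.flesch_reading_ease(text):.1f}, "
        f"Flesch-Kincaid Grade = {textstat.flesch_kincaid_grade(text):.1f}"
    )
```

On full-length clinical notes the study reported a mean BERTScore F1 of 0.85, with Flesch Reading Ease improving from 43.6 to 51.8; short snippets like the pair above will yield noisier values, since both metrics are more stable over longer passages.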