Performance of popular large language models in glaucoma patient education: A randomized controlled study

Yuyu Cao, Wei Lu, Runhan Shi, Fuying Liu, Steven Liu, Xinwei Xu, Jin Yang, Guangyu Rong, Changchang Xin, Xujiao Zhou, Xinghuai Sun, Jiaxu Hong
{"title":"Performance of popular large language models in glaucoma patient education: A randomized controlled study","authors":"Yuyu Cao ,&nbsp;Wei Lu ,&nbsp;Runhan Shi ,&nbsp;Fuying Liu ,&nbsp;Steven Liu ,&nbsp;Xinwei Xu ,&nbsp;Jin Yang ,&nbsp;Guangyu Rong ,&nbsp;Changchang Xin ,&nbsp;Xujiao Zhou ,&nbsp;Xinghuai Sun ,&nbsp;Jiaxu Hong","doi":"10.1016/j.aopr.2024.12.002","DOIUrl":null,"url":null,"abstract":"<div><h3>Purpose</h3><div>The advent of chatbots based on large language models (LLMs), such as ChatGPT, has significantly transformed knowledge acquisition. However, the application of LLMs in glaucoma patient education remains elusive. In this study, we comprehensively compared the performance of four common LLMs – Qwen, Baichuan 2, ChatGPT-4.0, and PaLM 2 – in the context of glaucoma patient education.</div></div><div><h3>Methods</h3><div>Initially, senior ophthalmologists were asked with scoring responses generated by the LLMs, which were answers to the most frequent glaucoma-related questions posed by patients. The Chinese Readability Platform was employed to assess the recommended reading age and reading difficulty score of the four LLMs. Subsequently, optimized models were filtered, and 29 glaucoma patients participated in posing questions to the chatbots and scoring the answers within a real-world clinical setting. Attending ophthalmologists were also required to score the answers across five dimensions: correctness, completeness, readability, helpfulness, and safety. Patients, on the other hand, scored the answers based on three dimensions: satisfaction, readability, and helpfulness.</div></div><div><h3>Results</h3><div>In the first stage, Baichuan 2 and ChatGPT-4.0 outperformed the other two models, though ChatGPT-4.0 had higher recommended reading age and reading difficulty scores. In the second stage, both Baichuan 2 and ChatGPT-4.0 demonstrated exceptional performance among patients and ophthalmologists, with no statistically significant differences observed.</div></div><div><h3>Conclusions</h3><div>Our research identifies Baichuan 2 and ChatGPT-4.0 as prominent LLMs, offering viable options for glaucoma education.</div></div>","PeriodicalId":72103,"journal":{"name":"Advances in ophthalmology practice and research","volume":"5 2","pages":"Pages 88-94"},"PeriodicalIF":3.4000,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in ophthalmology practice and research","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667376224000738","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose

The advent of chatbots based on large language models (LLMs), such as ChatGPT, has significantly transformed knowledge acquisition. However, the use of LLMs in glaucoma patient education remains underexplored. In this study, we comprehensively compared the performance of four widely used LLMs (Qwen, Baichuan 2, ChatGPT-4.0, and PaLM 2) in the context of glaucoma patient education.

Methods

Initially, senior ophthalmologists were asked to score the responses each LLM generated to the glaucoma-related questions most frequently posed by patients. The Chinese Readability Platform was used to assess the recommended reading age and reading difficulty score of each model's responses. The best-performing models were then selected, and 29 glaucoma patients posed questions to these chatbots and scored the answers in a real-world clinical setting. Attending ophthalmologists also scored the answers on five dimensions: correctness, completeness, readability, helpfulness, and safety. Patients scored the answers on three dimensions: satisfaction, readability, and helpfulness.
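
As a rough illustration of this two-stage scoring workflow, the sketch below averages rater scores per model and per dimension. It is a minimal, hypothetical example: the abstract does not describe the authors' analysis code, and the Likert values shown are illustrative assumptions, not study data.

```python
from statistics import mean

# Scoring dimensions named in the study; patients separately rated
# satisfaction, readability, and helpfulness.
EXPERT_DIMENSIONS = ["correctness", "completeness", "readability", "helpfulness", "safety"]

def aggregate_scores(ratings):
    """Average each model's per-dimension ratings across raters and questions.

    `ratings` maps model name -> dimension -> list of individual Likert scores.
    """
    return {
        model: {dim: mean(scores) for dim, scores in dims.items()}
        for model, dims in ratings.items()
    }

# Toy example: two raters scoring one answer from each of two models.
example = {
    "Baichuan 2": {dim: [4, 5] for dim in EXPERT_DIMENSIONS},
    "ChatGPT-4.0": {dim: [5, 4] for dim in EXPERT_DIMENSIONS},
}
print(aggregate_scores(example))
```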

Results

In the first stage, Baichuan 2 and ChatGPT-4.0 outperformed the other two models, although ChatGPT-4.0 had a higher recommended reading age and reading difficulty score. In the second stage, both Baichuan 2 and ChatGPT-4.0 received high ratings from patients and ophthalmologists, with no statistically significant difference between the two models.
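
The abstract does not state which statistical test supported the "no statistically significant difference" finding, so the sketch below is only an assumption of how such a comparison might look, using a nonparametric Mann-Whitney U test on fabricated, purely illustrative Likert ratings.

```python
from scipy.stats import mannwhitneyu

# Illustrative (fabricated) Likert ratings for one scoring dimension.
baichuan_scores = [4, 5, 4, 5, 5, 4, 4, 5]
chatgpt_scores = [5, 4, 5, 4, 4, 5, 5, 4]

stat, p_value = mannwhitneyu(baichuan_scores, chatgpt_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")  # p > 0.05 would indicate no significant difference
```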

Conclusions

Our research identifies Baichuan 2 and ChatGPT-4.0 as the top-performing LLMs, offering viable options for glaucoma patient education.