Evaluation of large language models for providing educational information in orthokeratology care

IF 4.1 | Medicine (CAS Tier 3) | Q1 OPHTHALMOLOGY
Yangyi Huang , Runhan Shi , Can Chen , Xueyi Zhou , Xingtao Zhou , Jiaxu Hong , Zhi Chen
{"title":"Evaluation of large language models for providing educational information in orthokeratology care","authors":"Yangyi Huang ,&nbsp;Runhan Shi ,&nbsp;Can Chen ,&nbsp;Xueyi Zhou ,&nbsp;Xingtao Zhou ,&nbsp;Jiaxu Hong ,&nbsp;Zhi Chen","doi":"10.1016/j.clae.2025.102384","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Large language models (LLMs) are gaining popularity in solving ophthalmic problems. However, their efficacy in patient education regarding orthokeratology, one of the main myopia control strategies, has yet to be determined.</div></div><div><h3>Methods</h3><div>This cross-sectional study established a question bank consisting of 24 orthokeratology-related questions used as queries for GTP-4, Qwen-72B, and Yi-34B to prompt responses in Chinese. Objective evaluations were conducted using an online platform. Subjective evaluations including correctness, relevance, readability, applicability, safety, clarity, helpfulness, and satisfaction were performed by experienced ophthalmologists and parents of myopic children using a 5-point Likert scale. The overall standardized scores were also calculated.</div></div><div><h3>Results</h3><div>The word count of the responses from Qwen-72B (199.42 ± 76.82) was the lowest (<em>P</em> &lt; 0.001), with no significant differences in recommended age among the LLMs. GPT-4 (3.79 ± 1.03) scored lower in readability than Yi-34B (4.65 ± 0.51) and Qwen-72B (4.65 ± 0.61) (<em>P</em> &lt; 0.001). No significant differences in safety, relevance, correctness, and applicability were observed across the three LLMs. Parental evaluations rated all LLMs an average score exceeding 4.7 points, with GPT-4 outperforming the others in helpfulness (<em>P</em> = 0.004) and satisfaction (<em>P</em> = 0.016). Qwen-72B’s overall standardized scores surpassed those of the other two LLMs (<em>P</em> = 0.048).</div></div><div><h3>Conclusions</h3><div>GPT-4 and the Chinese LLM Qwen-72B produced accurate and beneficial responses to inquiries on orthokeratology. Further enhancement to bolster precision is essential, particularly within diverse linguistic contexts.</div></div>","PeriodicalId":49087,"journal":{"name":"Contact Lens & Anterior Eye","volume":"48 3","pages":"Article 102384"},"PeriodicalIF":4.1000,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Contact Lens & Anterior Eye","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1367048425000189","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Background

Large language models (LLMs) are gaining popularity in solving ophthalmic problems. However, their efficacy in patient education regarding orthokeratology, one of the main myopia control strategies, has yet to be determined.

Methods

This cross-sectional study established a question bank of 24 orthokeratology-related questions, which were used as queries to prompt responses in Chinese from GPT-4, Qwen-72B, and Yi-34B. Objective evaluations were conducted using an online platform. Subjective evaluations of correctness, relevance, readability, applicability, safety, clarity, helpfulness, and satisfaction were performed by experienced ophthalmologists and by parents of myopic children using a 5-point Likert scale. Overall standardized scores were also calculated.
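The abstract does not specify how the overall standardized scores were computed. The sketch below shows one plausible approach, assuming each criterion's 5-point Likert ratings are z-scored across models and then averaged per model; the function name, data layout, and ratings are illustrative and not taken from the study.

```python
# Minimal sketch (not the study's actual procedure): z-score Likert ratings per
# criterion across all models, then average the z-scores per model.
import statistics

def standardized_overall_scores(ratings):
    """ratings: {model: {criterion: [Likert scores]}} -> {model: overall z-score}."""
    criteria = {c for per_model in ratings.values() for c in per_model}
    # Pool every model's scores for each criterion to get criterion-wise mean/SD.
    pooled = {c: [s for per_model in ratings.values() for s in per_model.get(c, [])]
              for c in criteria}
    norms = {c: (statistics.mean(v), statistics.stdev(v))
             for c, v in pooled.items() if len(v) > 1}
    overall = {}
    for model, per_model in ratings.items():
        z_values = []
        for c, scores in per_model.items():
            if c not in norms or norms[c][1] == 0:
                continue  # skip criteria with no variance
            mean, sd = norms[c]
            z_values.extend((s - mean) / sd for s in scores)
        overall[model] = statistics.mean(z_values) if z_values else 0.0
    return overall

# Illustrative, made-up ratings for two of the eight criteria:
ratings = {
    "GPT-4":    {"readability": [4, 3, 4], "helpfulness": [5, 5, 5]},
    "Qwen-72B": {"readability": [5, 5, 4], "helpfulness": [5, 4, 5]},
    "Yi-34B":   {"readability": [5, 4, 5], "helpfulness": [4, 4, 5]},
}
print(standardized_overall_scores(ratings))
```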

Results

The responses from Qwen-72B had the lowest word count (199.42 ± 76.82; P < 0.001), with no significant differences in recommended age among the LLMs. GPT-4 (3.79 ± 1.03) scored lower in readability than Yi-34B (4.65 ± 0.51) and Qwen-72B (4.65 ± 0.61) (P < 0.001). No significant differences in safety, relevance, correctness, or applicability were observed across the three LLMs. Parents rated all LLMs at an average score above 4.7 points, with GPT-4 outperforming the others in helpfulness (P = 0.004) and satisfaction (P = 0.016). Qwen-72B's overall standardized score surpassed those of the other two LLMs (P = 0.048).
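The abstract does not report which statistical tests produced these P values. As context only, a minimal sketch of one common way to compare ordinal Likert ratings across three groups (a Kruskal-Wallis test) is shown below; the ratings are made-up illustrative values, not study data.

```python
# Hypothetical sketch: compare Likert ratings across three models with a
# Kruskal-Wallis test (the actual tests used in the study are not stated here).
from scipy.stats import kruskal

helpfulness = {                       # made-up illustrative ratings, not study data
    "GPT-4":    [5, 5, 4, 5, 5],
    "Qwen-72B": [4, 5, 4, 4, 5],
    "Yi-34B":   [4, 4, 5, 4, 4],
}
stat, p = kruskal(*helpfulness.values())
print(f"Kruskal-Wallis H = {stat:.2f}, P = {p:.3f}")
```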

Conclusions

GPT-4 and the Chinese LLM Qwen-72B produced accurate and beneficial responses to inquiries on orthokeratology. Further enhancement is needed to improve precision, particularly in diverse linguistic contexts.
Source journal: Contact Lens & Anterior Eye
CiteScore: 7.60
Self-citation rate: 18.80%
Articles published: 198
Review time: 55 days
Journal description: Contact Lens & Anterior Eye is a research-based journal covering all aspects of contact lens theory and practice, including original articles on invention and innovations, as well as the regular features of: Case Reports; Literary Reviews; Editorials; Instrumentation and Techniques and Dates of Professional Meetings.