Application and accuracy of artificial intelligence-derived large language models in patients with age-related macular degeneration

IF 1.9 Q2 OPHTHALMOLOGY
Lorenzo Ferro Desideri, Janice Roth, Martin Zinkernagel, Rodrigo Anguita
Citations: 0

Abstract

"Application and accuracy of artificial intelligence-derived large language models in patients with age-related macular degeneration"

Introduction: Age-related macular degeneration (AMD) affects millions of people globally, leading to a surge in online searches about putative diagnoses and causing potential misinformation and anxiety in patients and their relatives. This study explores the efficacy of artificial intelligence-derived large language models (LLMs) in addressing AMD patients' questions.

Methods: ChatGPT 3.5 (2023), Bing AI (2023), and Google Bard (2023) were adopted as LLMs. Patients' questions were subdivided into two categories, (a) general medical advice and (b) pre- and post-intravitreal injection advice, and responses were classified as (1) accurate and sufficient, (2) partially accurate but sufficient, or (3) inaccurate and not sufficient. A non-parametric test was performed to compare the mean scores of the three LLMs, and an analysis of variance and reliability tests were performed across the three groups.
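The non-parametric comparison described above can be sketched as follows. The per-question ratings here are invented for illustration (the abstract does not reproduce the raw scores), and the Kruskal-Wallis H-test stands in for the unspecified non-parametric test:

```python
# Illustrative only: hypothetical 1-3 ratings per answer
# (1 = accurate and sufficient, 3 = inaccurate and not sufficient).
from scipy.stats import kruskal

chatgpt_35  = [1, 1, 2, 1, 1, 1, 2, 1, 1, 1]
bing_ai     = [2, 1, 2, 1, 2, 1, 3, 1, 2, 2]
google_bard = [2, 1, 1, 2, 3, 1, 2, 1, 2, 1]

# Kruskal-Wallis H-test: do the three rating distributions differ?
stat, p = kruskal(chatgpt_35, bing_ai, google_bard)
print(f"H = {stat:.3f}, p = {p:.3f}")
```

With real data, a p-value below 0.05 would correspond to the significant between-group difference reported for category (b).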

Results: In category (a), the average score was 1.20 (± 0.41) with ChatGPT 3.5, 1.60 (± 0.63) with Bing AI, and 1.60 (± 0.73) with Google Bard, showing no significant difference among the three groups (p = 0.129). The average score in category (b) was 1.07 (± 0.27) with ChatGPT 3.5, 1.69 (± 0.63) with Bing AI, and 1.38 (± 0.63) with Google Bard, showing a significant difference among the three groups (p = 0.0042). Reliability statistics showed a Cronbach's α of 0.237 (range 0.448, 0.096-0.544).
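The Cronbach's α reported above quantifies agreement among the three models' ratings. A minimal sketch of the computation, on invented scores since the study's item-level data are not reproduced in this abstract:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_questions, n_models) score matrix:
    alpha = k/(k-1) * (1 - sum(column variances) / variance of row totals)."""
    r = np.asarray(ratings, dtype=float)
    k = r.shape[1]                           # number of models (columns)
    item_vars = r.var(axis=0, ddof=1)        # per-model score variance
    total_var = r.sum(axis=1).var(ddof=1)    # variance of per-question totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical 1-3 scores for five questions from three models
scores = [[1, 1, 1], [2, 2, 2], [1, 1, 2], [3, 2, 3], [1, 2, 1]]
print(round(cronbach_alpha(scores), 3))  # → 0.825
```

An α near the study's 0.237 would indicate low internal consistency, i.e. the three models often disagreed on the same question.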

Conclusion: ChatGPT 3.5 consistently offered the most accurate and satisfactory responses, particularly for technical queries. While LLMs showed promise in providing precise information about AMD, further improvements are needed, especially for more technical questions.

Source journal: International Journal of Retina and Vitreous
CiteScore: 3.50
Self-citation rate: 4.30%
Articles per year: 81
Review time: 19 weeks
Journal description: International Journal of Retina and Vitreous focuses on the ophthalmic subspecialty of vitreoretinal disorders. The journal presents original articles on new approaches to diagnosis, outcomes of clinical trials, innovations in pharmacological therapy and surgical techniques, as well as basic science advances that impact clinical practice. Topical areas include, but are not limited to: -Imaging of the retina, choroid and vitreous -Innovations in optical coherence tomography (OCT) -Small-gauge vitrectomy, retinal detachment, chromovitrectomy -Electroretinography (ERG), microperimetry, other functional tests -Intraocular tumors -Retinal pharmacotherapy & drug delivery -Diabetic retinopathy & other vascular diseases -Age-related macular degeneration (AMD) & other macular entities