Utilizing ChatGPT-3.5 to Assist Ophthalmologists in Clinical Decision-making

IF 1.5 Q3 OPHTHALMOLOGY
Journal of Ophthalmic & Vision Research. Pub Date: 2025-05-05. eCollection Date: 2025-01-01. DOI: 10.18502/jovr.v20.14692
Samir Cayenne, Natalia Penaloza, Anne C Chan, M I Tahashilder, Rodney C Guiseppi, Touka Banaee
{"title":"利用ChatGPT-3.5辅助眼科医生进行临床决策。","authors":"Samir Cayenne, Natalia Penaloza, Anne C Chan, M I Tahashilder, Rodney C Guiseppi, Touka Banaee","doi":"10.18502/jovr.v20.14692","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>ChatGPT-3.5 has the potential to assist ophthalmologists by generating a differential diagnosis based on patient presentation.</p><p><strong>Methods: </strong>One hundred ocular pathologies were tested. Each pathology had two signs and two symptoms prompted into ChatGPT-3.5 through a clinical vignette template to generate a list of four preferentially ordered differential diagnoses, denoted as Method A. Thirty of the original 100 pathologies were further subcategorized into three groups of 10: cornea, retina, and neuro-ophthalmology. To assess whether additional clinical information affected the accuracy of results, these subcategories were again prompted into ChatGPT-3.5 with the same previous two signs and symptoms, along with additional risk factors of age, sex, and past medical history, denoted as Method B. A one-tailed Wilcoxon signed-rank test was performed to compare the accuracy between Methods A and B across each subcategory (significance indicated by <i>P</i> <math><mo><</mo></math> 0.05).</p><p><strong>Results: </strong>ChatGPT-3.5 correctly diagnosed 51 out of 100 cases (51.00%) as its first differential diagnosis and 18 out of 100 cases (18.00%) as a differential other than its first diagnosis. However, 31 out of 100 cases (31.00%) were not included in the differential diagnosis list. Only the subcategory of neuro-ophthalmology showed a significant increase in accuracy (<i>P</i> = 0.01) when prompted with the additional risk factors (Method B) compared to only two signs and two symptoms (Method A).</p><p><strong>Conclusion: </strong>These results demonstrate that ChatGPT-3.5 may help assist clinicians in suggesting possible diagnoses based on varying complex clinical information. However, its accuracy is limited, and it cannot be utilized as a replacement for clinical decision-making.</p>","PeriodicalId":16586,"journal":{"name":"Journal of Ophthalmic & Vision Research","volume":"20 ","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12257982/pdf/","citationCount":"0","resultStr":"{\"title\":\"Utilizing ChatGPT-3.5 to Assist Ophthalmologists in Clinical Decision-making.\",\"authors\":\"Samir Cayenne, Natalia Penaloza, Anne C Chan, M I Tahashilder, Rodney C Guiseppi, Touka Banaee\",\"doi\":\"10.18502/jovr.v20.14692\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>ChatGPT-3.5 has the potential to assist ophthalmologists by generating a differential diagnosis based on patient presentation.</p><p><strong>Methods: </strong>One hundred ocular pathologies were tested. Each pathology had two signs and two symptoms prompted into ChatGPT-3.5 through a clinical vignette template to generate a list of four preferentially ordered differential diagnoses, denoted as Method A. Thirty of the original 100 pathologies were further subcategorized into three groups of 10: cornea, retina, and neuro-ophthalmology. 
To assess whether additional clinical information affected the accuracy of results, these subcategories were again prompted into ChatGPT-3.5 with the same previous two signs and symptoms, along with additional risk factors of age, sex, and past medical history, denoted as Method B. A one-tailed Wilcoxon signed-rank test was performed to compare the accuracy between Methods A and B across each subcategory (significance indicated by <i>P</i> <math><mo><</mo></math> 0.05).</p><p><strong>Results: </strong>ChatGPT-3.5 correctly diagnosed 51 out of 100 cases (51.00%) as its first differential diagnosis and 18 out of 100 cases (18.00%) as a differential other than its first diagnosis. However, 31 out of 100 cases (31.00%) were not included in the differential diagnosis list. Only the subcategory of neuro-ophthalmology showed a significant increase in accuracy (<i>P</i> = 0.01) when prompted with the additional risk factors (Method B) compared to only two signs and two symptoms (Method A).</p><p><strong>Conclusion: </strong>These results demonstrate that ChatGPT-3.5 may help assist clinicians in suggesting possible diagnoses based on varying complex clinical information. However, its accuracy is limited, and it cannot be utilized as a replacement for clinical decision-making.</p>\",\"PeriodicalId\":16586,\"journal\":{\"name\":\"Journal of Ophthalmic & Vision Research\",\"volume\":\"20 \",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2025-05-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12257982/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Ophthalmic & Vision Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.18502/jovr.v20.14692\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q3\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Ophthalmic & Vision Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18502/jovr.v20.14692","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Purpose: To evaluate whether ChatGPT-3.5 can assist ophthalmologists by generating a differential diagnosis based on patient presentation.

Methods: One hundred ocular pathologies were tested. For each pathology, two signs and two symptoms were entered into ChatGPT-3.5 through a clinical vignette template to generate a list of four differential diagnoses ranked in order of preference, denoted as Method A. Thirty of the original 100 pathologies were further subcategorized into three groups of 10: cornea, retina, and neuro-ophthalmology. To assess whether additional clinical information affected the accuracy of the results, these subcategories were prompted into ChatGPT-3.5 again with the same two signs and two symptoms, along with the additional risk factors of age, sex, and past medical history, denoted as Method B. A one-tailed Wilcoxon signed-rank test was performed to compare accuracy between Methods A and B within each subcategory (significance at P < 0.05).
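As an illustration of this prompting workflow, the sketch below shows how a two-sign, two-symptom vignette could be assembled and sent through the official OpenAI Python client. The template wording, the example case, and the use of "gpt-3.5-turbo" as the API counterpart of ChatGPT-3.5 are assumptions for illustration; the study does not publish its exact prompt or code.

```python
# Minimal sketch of a vignette-prompting workflow like the one described above.
# NOT the study's actual code: the template text, the example case, and the
# "gpt-3.5-turbo" model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VIGNETTE_TEMPLATE = (
    "A patient presents with the following signs: {sign_1} and {sign_2}, "
    "and reports the following symptoms: {symptom_1} and {symptom_2}. "
    "List the four most likely diagnoses in order of likelihood."
)

def differential_for(signs: tuple[str, str], symptoms: tuple[str, str]) -> str:
    """Prompt the model with one two-sign, two-symptom vignette (Method A style)."""
    prompt = VIGNETTE_TEMPLATE.format(
        sign_1=signs[0], sign_2=signs[1],
        symptom_1=symptoms[0], symptom_2=symptoms[1],
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical example case (not one of the study's 100 pathologies):
print(differential_for(
    signs=("dendritic epithelial lesion on fluorescein staining",
           "reduced corneal sensation"),
    symptoms=("unilateral eye pain", "photophobia"),
))
```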

Results: ChatGPT-3.5 listed the correct diagnosis first in 51 of 100 cases (51.00%) and included it in the differential, but not first, in 18 of 100 cases (18.00%). In the remaining 31 of 100 cases (31.00%), the correct diagnosis was not included in the differential list at all. Only the neuro-ophthalmology subcategory showed a significant increase in accuracy (P = 0.01) when the additional risk factors were provided (Method B) compared with only two signs and two symptoms (Method A).
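To make the statistical comparison concrete, the sketch below runs a one-tailed Wilcoxon signed-rank test on hypothetical paired per-case scores for a single subcategory. The scoring scheme and the numbers are invented for illustration only; the study does not report its per-case scores.

```python
# Minimal sketch of the Method A vs. Method B comparison for one subcategory,
# using SciPy's Wilcoxon signed-rank test. The per-case scoring scheme
# (2 = correct diagnosis ranked first, 1 = listed but not first, 0 = absent)
# and the score values below are illustrative assumptions, not the study's data.
from scipy.stats import wilcoxon

method_a_scores = [2, 0, 1, 2, 0, 1, 0, 2, 1, 0]  # two signs + two symptoms only
method_b_scores = [2, 2, 2, 2, 1, 1, 1, 2, 2, 2]  # same cases with age, sex, history added

# One-tailed test: do Method B scores exceed Method A scores for these cases?
stat, p_value = wilcoxon(method_b_scores, method_a_scores, alternative="greater")
print(f"W = {stat}, one-tailed p = {p_value:.3f}")  # significant if p < 0.05
```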

Conclusion: These results demonstrate that ChatGPT-3.5 may assist clinicians by suggesting possible diagnoses based on varied and complex clinical information. However, its accuracy is limited, and it cannot replace clinical decision-making.

Source journal metrics: CiteScore 3.60; self-citation rate 0.00%; articles published 63; review time 30 weeks.