ChatGPT and retinal disease: a cross-sectional study on AI comprehension of clinical guidelines.
Michael Balas, Efrem D Mandelcorn, Peng Yan, Edsel B Ing, Sean A Crawford, Parnian Arjmand
DOI: 10.1016/j.jcjo.2024.06.001
Abstract
Objective: To evaluate the performance of an artificial intelligence (AI) large language model, ChatGPT (version 4.0), for common retinal diseases, in accordance with the American Academy of Ophthalmology (AAO) Preferred Practice Pattern (PPP) guidelines.
Design: A cross-sectional survey study design was employed to compare ChatGPT's responses against established clinical guidelines.
Participants: Responses generated by the AI were evaluated by a panel of three vitreoretinal specialists.
Methods: To investigate ChatGPT's comprehension of clinical guidelines, we designed 130 questions covering a broad spectrum of topics within 12 AAO PPP domains of retinal disease. These questions were crafted to encompass diagnostic criteria, treatment guidelines, and management strategies, including both medical and surgical aspects of retinal care. A panel of three retinal specialists independently evaluated responses on a Likert scale from 1 to 5 based on their relevance, accuracy, and adherence to AAO PPP guidelines. Response readability was evaluated using Flesch Reading Ease and Flesch-Kincaid grade level scores.
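Readability note: the abstract does not specify which software was used to compute the readability metrics, but both indices follow standard published formulas. The sketch below is a minimal, illustrative Python implementation; the vowel-group syllable counter is a deliberately naive simplification (real readability tools use dictionary-based syllabification), and the example sentence is invented for demonstration only.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate from vowel groups (illustrative heuristic only)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid grade level) for `text`
    using the standard published formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(1, len(sentences))   # words per sentence
    spw = syllables / max(1, len(words))        # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fk_grade = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fk_grade

# Hypothetical example: score one model response before expert review.
fre, grade = readability(
    "Anti-VEGF therapy is the first-line treatment for "
    "neovascular age-related macular degeneration."
)
print(f"Flesch Reading Ease: {fre:.1f}, Flesch-Kincaid grade: {grade:.1f}")
```

On the standard Flesch scale, scores of roughly 0-30 correspond to material best understood by college graduates, which is consistent with the college-to-graduate comprehension level reported in the Results.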
Results: ChatGPT achieved an overall average score of 4.9/5.0, suggesting high alignment with the AAO PPP guidelines. Scores varied across domains, with the lowest scores in the surgical management of disease. The responses had low reading ease scores and required a college-to-graduate level of comprehension. Identified errors were related to diagnostic criteria, treatment options, and methodological procedures.
Conclusion: ChatGPT 4.0 demonstrated significant potential in generating guideline-concordant responses, particularly for common medical retinal diseases. However, its performance was slightly lower for surgical retinal topics, highlighting the ongoing need for clinician input, further model refinement, and improved comprehensibility.