Analysis of ChatGPT Responses to Ophthalmic Cases: Can ChatGPT Think like an Ophthalmologist?

Jimmy S. Chen MD, Akshay J. Reddy BS, Eman Al-Sharif MD, Marissa K. Shoji MD, Fritz Gerald P. Kalaw MD, Medi Eslani MD, Paul Z. Lang MD, Malvika Arya MD, Zachary A. Koretz MD, MPH, Kyle A. Bolo MD, Justin J. Arnett MD, Aliya C. Roginiel MD, MPH, Jiun L. Do MD, PhD, Shira L. Robbins MD, Andrew S. Camp MD, Nathan L. Scott MD, Jolene C. Rudell MD, PhD, Robert N. Weinreb MD, Sally L. Baxter MD, MSc, David B. Granet MD, MHCM

Ophthalmology Science. 2024. DOI: 10.1016/j.xops.2024.100600
Objective
Large language models such as ChatGPT have demonstrated significant potential for question-answering within ophthalmology, but there is a paucity of literature evaluating their ability to generate clinical assessments and discussions. The objectives of this study were to (1) assess the accuracy of assessments and plans generated by ChatGPT and (2) evaluate ophthalmologists’ abilities to distinguish between responses generated by clinicians versus ChatGPT.
Design
Cross-sectional mixed-methods study.
Subjects
Sixteen ophthalmologists from a single academic center, of whom 10 were board-eligible and 6 were board-certified, were recruited to participate in this study.
Methods
Prompt engineering was used to ensure that ChatGPT output discussions in the style of the ophthalmologist author of the Medical College of Wisconsin Ophthalmic Case Studies. Cases in which ChatGPT accurately identified the primary diagnosis were included and then paired. Masked human-generated and ChatGPT-generated discussions were sent to participating ophthalmologists, who were asked to identify the author of each discussion. Response confidence was assessed on a 5-point Likert scale, and subjective feedback was manually reviewed.
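The abstract does not include the authors' prompts or code; as a rough illustration only, the sketch below shows one way the described prompt-engineering and masked-pairing steps could be implemented, assuming the OpenAI Python SDK. The model name, system-prompt wording, and helper functions are hypothetical, not the study's materials.

```python
# A minimal, hypothetical sketch of the prompt-engineering and masked-pairing
# workflow described above. The model name, system prompt wording, and data
# handling are illustrative assumptions, not the study's actual prompts or code.
import random

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STYLE_INSTRUCTION = (
    "You are an academic ophthalmologist. Given the case presentation below, "
    "write an assessment and plan in the discussion style of the Medical "
    "College of Wisconsin Ophthalmic Case Studies."
)

def generate_discussion(case_text: str, model: str = "gpt-4") -> str:
    """Generate a ChatGPT-written discussion for a single ophthalmic case."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": STYLE_INSTRUCTION},
            {"role": "user", "content": case_text},
        ],
    )
    return response.choices[0].message.content

def build_masked_pair(case_text: str, human_discussion: str) -> dict:
    """Pair the human-written and ChatGPT-written discussions for one case,
    shuffling them so reviewers cannot infer authorship from position."""
    discussions = [("human", human_discussion),
                   ("chatgpt", generate_discussion(case_text))]
    random.shuffle(discussions)
    return {
        "discussion_A": discussions[0][1],
        "discussion_B": discussions[1][1],
        # The answer key is withheld from reviewers until after grading.
        "answer_key": {label: source for label, (source, _) in zip("AB", discussions)},
    }
```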
Main Outcome Measures
Accuracy of ophthalmologists’ identification of the discussion author, as well as subjective perceptions of human-generated versus ChatGPT-generated discussions.
Results
Overall, ChatGPT correctly identified the primary diagnosis in 15 of 17 (88.2%) cases. Two cases were excluded from the paired comparison because of hallucinations or fabrication of data not provided by the user. Ophthalmologists correctly identified the author in 77.9% ± 26.6% of the 13 included cases, with a mean Likert scale confidence rating of 3.6 ± 1.0. No significant differences in performance or confidence were found between board-certified and board-eligible ophthalmologists. Subjectively, ophthalmologists found that discussions written by ChatGPT tended to contain more generic responses and irrelevant information, hallucinate more frequently, and exhibit distinct syntactic patterns (all P < 0.01).
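For readers who want to retrace the arithmetic behind the headline figures, the fragment below recomputes the diagnostic accuracy and sketches one plausible group comparison between board-certified and board-eligible participants. The abstract does not report per-participant data or the statistical test used, so the counts and the Mann-Whitney U test here are placeholders and assumptions, not the study's analysis.

```python
# Illustrative arithmetic and a hypothetical group comparison. The per-participant
# counts are placeholders, and the Mann-Whitney U test is an assumption; the
# abstract does not specify the authors' statistical methods.
from scipy.stats import mannwhitneyu

# ChatGPT identified the primary diagnosis in 15 of 17 cases.
print(f"Primary diagnosis accuracy: {15 / 17:.1%}")  # 88.2%

# Hypothetical numbers of correctly identified authors (out of 13 paired cases)
# for the 6 board-certified and 10 board-eligible participants.
board_certified = [11, 12, 9, 10, 13, 8]
board_eligible = [7, 12, 10, 13, 11, 6, 9, 12, 4, 10]

stat, p_value = mannwhitneyu(board_certified, board_eligible, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p_value:.3f}")  # the study reported no significant difference
```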
Conclusions
Large language models have the potential to synthesize clinical data and generate ophthalmic discussions. While these findings have exciting implications for artificial intelligence-assisted health care delivery, more rigorous real-world evaluation of these models is necessary before clinical deployment.
Financial Disclosures
The author(s) have no proprietary or commercial interest in any materials discussed in this article.