Eye | Pub Date: 2024-12-18 | DOI: 10.1038/s41433-024-03549-5
Why do you want to become a Doctor?
Gwyn Samuel Williams

Eye | Pub Date: 2024-12-18 | DOI: 10.1038/s41433-024-03557-5
Risk assessment models: considerations prior to use in clinical practice.
Andrea J Darzi, Jason W Busse, Kian Torabiardakani, Mark Phillips, Lehana Thabane, Mohit Bhandari, Enrico Borrelli, David H Steel, Charles C Wykoff, Varun Chaudhary

Eye | Pub Date: 2024-12-17 | DOI: 10.1038/s41433-024-03500-8
Trends in nationwide incidence of uveitis in South Korea using the National Health Insurance claims database from 2010 to 2021.
Eun Hee Hong, Jiyeong Kim, Sungwho Park, Heeyoon Cho, Min Ho Kang, Albert Bromeo, Anh Ngoc Tram Tran, Amir Akhavanrezayat, Chi Mong Christopher Or, Zheng Xian Thng, Yong Un Shin, Quan Dong Nguyen

Objectives: To investigate the population-based incidence of uveitis, and the differences between anterior and non-anterior uveitis, using the comprehensive Korean National Health Insurance Service (NHIS) database.

Methods: We extracted data on patients who visited a clinic and were diagnosed with uveitis (based on the Korean Classification of Diseases) from 2010 to 2021. We investigated the incidence of uveitis and the differences in demographics and underlying comorbidities among the anterior uveitis, non-anterior uveitis, and control groups.

Results: We identified 919,370 cases of uveitis (anterior: 800,132; non-anterior: 119,238). The average incidences (per 10,000 persons) of anterior and non-anterior uveitis were 13.0 (95% confidence interval [CI], 12.9-13.0) and 1.9 (95% CI, 1.9-1.9), respectively. The incidence rose over the decade (2010: 13.0; 2019: 16.5) but fell during the coronavirus disease (COVID-19) pandemic (2020: 15.5; 2021: 15.4). Compared with the anterior group, the non-anterior group was significantly associated with female sex (odds ratio [OR]: 1.09, p < 0.0001), age 40-69 years (p < 0.0001), a high Charlson Comorbidity Index (p < 0.0001), high household income (p < 0.0001), and several immunologic diseases (antiphospholipid antibody syndrome, OR: 1.79, p < 0.0001; systemic lupus erythematosus, OR: 1.22, p < 0.0001; psoriasis, OR: 1.13, p < 0.0001; ulcerative colitis, OR: 1.11, p = 0.0013; tuberculosis, OR: 1.09, p < 0.0001; rheumatoid arthritis, OR: 1.05, p < 0.0001).

Conclusions: Using the NHIS database, we conducted the largest population-based epidemiological study of uveitis in South Korea, estimating its rising incidence over the past decade (including changes during the COVID-19 pandemic) as well as its anatomical distribution. Our results may help estimate the national burden of uveitis.

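The incidence figures quoted in such claims-database studies are crude rates with confidence intervals derived from Poisson counts. As a minimal sketch of how a rate per 10,000 with a normal-approximation 95% CI is computed, using the abstract's anterior-uveitis case count but a hypothetical person-year denominator (the study's actual denominators are not given here):

```python
import math

def incidence_per_10k(cases: int, person_years: float) -> tuple[float, float, float]:
    """Crude incidence per 10,000 person-years with a normal-approximation
    95% CI for a Poisson count: rate +/- 1.96 * sqrt(cases) / person_years."""
    rate = cases / person_years * 10_000
    se = math.sqrt(cases) / person_years * 10_000
    return rate, rate - 1.96 * se, rate + 1.96 * se

# Illustrative only: 800,132 anterior-uveitis cases (from the abstract) over a
# hypothetical denominator chosen to yield a rate near 13.0 per 10,000.
rate, lo, hi = incidence_per_10k(800_132, 615_000_000)
print(f"{rate:.1f} per 10,000 (95% CI {lo:.1f}-{hi:.1f})")
```

With counts this large the normal approximation is tight, which is why the abstract's CI (12.9-13.0) barely differs from the point estimate; for small counts an exact Poisson CI would be preferable.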
Eye | Pub Date: 2024-12-17 | DOI: 10.1038/s41433-024-03545-9
Benchmarking the performance of large language models in uveitis: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, Google Gemini, and Anthropic Claude3.
Fang-Fang Zhao, Han-Jie He, Jia-Jian Liang, Jingyun Cen, Yun Wang, Hongjie Lin, Feifei Chen, Tai-Ping Li, Jian-Feng Yang, Lan Chen, Ling-Ping Cen

Background/objective: This study aimed to evaluate the accuracy, comprehensiveness, and readability of responses generated by four large language models (LLMs) (ChatGPT-3.5, ChatGPT-4.0, Google Gemini, and Claude 3) in the clinical context of uveitis, using a rigorous grading methodology.

Methods: Twenty-seven clinical uveitis questions were presented individually to the four LLMs: ChatGPT (versions GPT-3.5 and GPT-4.0), Google Gemini, and Claude 3. Three experienced uveitis specialists independently assessed the responses for accuracy on a three-point scale across three rounds, with a 48-hour wash-out interval between rounds. The final accuracy rating for each LLM response ('Excellent', 'Marginal', or 'Deficient') was determined by majority consensus. Comprehensiveness was evaluated on a three-point scale for responses rated 'Excellent' in the final accuracy assessment. Readability was determined using the Flesch-Kincaid Grade Level formula. Statistical analyses were conducted to identify significant differences among the LLMs, with a significance threshold of p < 0.05.

Results: Claude 3 and ChatGPT-4 demonstrated significantly higher accuracy than Gemini (p < 0.001). Claude 3 also showed the highest proportion of 'Excellent' ratings (96.3%), followed by ChatGPT-4 (88.9%). ChatGPT-3.5, Claude 3, and ChatGPT-4 had no responses rated 'Deficient', unlike Gemini (14.8%) (p = 0.014). ChatGPT-4 and Claude 3 each showed greater comprehensiveness than Gemini (p = 0.008 and p = 0.042, respectively). Gemini showed significantly better readability than ChatGPT-3.5, Claude 3, and ChatGPT-4 (p < 0.001), and used fewer words, letter characters, and sentences than ChatGPT-3.5 and Claude 3.

Conclusions: Our study highlights the strong performance of Claude 3 and ChatGPT-4 in providing accurate and thorough information on uveitis, surpassing Gemini. ChatGPT-4 and Claude 3 may prove useful tools for improving patient understanding of, and involvement in, their uveitis care.

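The readability metric used in the study above, the Flesch-Kincaid Grade Level, is a published formula: 0.39 x (words/sentences) + 11.8 x (syllables/words) - 15.59. A minimal sketch follows; the syllable count uses a crude vowel-group heuristic (real readability tools use dictionaries or more careful rules), so treat the scores as approximations:

```python
import re

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
    Syllables are approximated by counting groups of consecutive vowels."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Short, monosyllabic sentences score near (or below) zero, while dense clinical prose scores far higher, which is the gap the study's readability comparison measures.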
Eye | Pub Date: 2024-12-17 | DOI: 10.1038/s41433-024-03541-z
Paediatric acute myeloid leukaemia presenting initially with an orbital mass.
Jiahe Nie, Yang Gao, Rong Lu

Eye | Pub Date: 2024-12-16 | DOI: 10.1038/s41433-024-03552-w
Comment on: "Alpha-1 antagonist treatment for eyelid retraction in patients with thyroid eye disease-a prospective pilot study".
Duncan Marston, Andrea Noah Paris, Nicole Galdes, Sara Xuereb

Eye | Pub Date: 2024-12-16 | DOI: 10.1038/s41433-024-03476-5
Leveraging large language models to improve patient education on dry eye disease.
Qais A Dihan, Andrew D Brown, Muhammad Z Chauhan, Ahmad F Alzein, Seif E Abdelnaem, Sean D Kelso, Dania A Rahal, Royce Park, Mohammadali Ashraf, Amr Azzam, Mahmoud Morsi, David B Warner, Ahmed B Sallam, Hajirah N Saeed, Abdelrahman M Elhusseiny

Background/objectives: Dry eye disease (DED) is an exceedingly common diagnosis, yet recent analyses have shown patient education materials (PEMs) on DED to be of low quality and readability. Our study evaluated the utility and performance of three large language models (LLMs) in enhancing existing PEMs and generating new PEMs on DED.

Subjects/methods: We evaluated PEMs generated by ChatGPT-3.5, ChatGPT-4, and Gemini Advanced using three separate prompts. Prompts A and B asked the models to generate PEMs on DED, with Prompt B specifying a 6th-grade reading level as measured by the SMOG (Simple Measure of Gobbledygook) readability formula. Prompt C asked for a rewrite of existing PEMs at a 6th-grade reading level. Each PEM was assessed for readability (SMOG; FKGL: Flesch-Kincaid Grade Level), quality (PEMAT: Patient Education Materials Assessment Tool; DISCERN), and accuracy (Likert misinformation scale).

Results: All LLM-generated PEMs in response to Prompts A and B were of high quality (median DISCERN = 4), understandable (PEMAT understandability ≥70%), and accurate (Likert score = 1), though not actionable (PEMAT actionability <70%). ChatGPT-4 and Gemini Advanced rewrote existing PEMs (Prompt C) from a baseline readability (FKGL: 8.0 ± 2.4; SMOG: 7.9 ± 1.7) to the targeted 6th-grade reading level, and the rewrites contained little to no misinformation (median Likert misinformation score = 1; range: 1-2). However, only ChatGPT-4 maintained high quality and reliability in its rewrites (median DISCERN = 4).

Conclusion: LLMs (notably ChatGPT-4) were able to generate and rewrite PEMs on DED that were readable, accurate, and of high quality. Our study underscores the value of LLMs as supplementary tools for improving PEMs.

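The SMOG grade that this study targeted is also a published formula: 1.0430 x sqrt(polysyllables x 30 / sentences) + 3.1291, where a polysyllable is any word of three or more syllables. A minimal sketch, again using a crude vowel-group syllable heuristic rather than the dictionary-based counting a real tool would use:

```python
import math
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (minimum one).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog(text: str) -> float:
    """SMOG grade: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291,
    where polysyllables are words with three or more syllables."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * 30 / sentences) + 3.1291
```

Text with no three-syllable words bottoms out at the formula's constant (about 3.13, roughly a 3rd-grade floor), which is why hitting a 6th-grade SMOG target is largely a matter of trimming polysyllabic vocabulary.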