Customized Evaluation of Progressive Visual Sensitivity Loss in Geographic Atrophy to Improve the Power of Clinical Trials
Abera Saeed MChD, Robyn H. Guymer MBBS, PhD, Xavier Hadoux MEng, PhD, Maxime Jannaud MEng, Darvy Dang BOrth(Hons), Lauren A.B. Hodgson MPH, Emily K. Glover OD, Erin E. Gee BAppSc(MedRad), Peter van Wijngaarden MBBS(Hons), PhD, Zhichao Wu BAppSc(Optom), PhD
Ophthalmology Science, 5(4), Article 100763. Published March 14, 2025. DOI: 10.1016/j.xops.2025.100763

Purpose: To evaluate the effectiveness of different approaches for customizing the selection of a subset of test locations on defect-mapping microperimetry (DMP) to improve the detection of progressive visual function decline in geographic atrophy (GA).
Design: Prospective longitudinal study.
Participants: Sixty eyes from 53 participants with GA secondary to age-related macular degeneration.
Methods: Participants underwent DMP testing twice at each 3-monthly visit for up to 24 months; the extent of deep visual sensitivity loss on each test was determined through single presentations of 10-decibel stimuli at 208 locations within the central 8°-radius region. Seven outcome measures were derived, including the proportion of locations missed (PLM; showing nonresponse to stimuli) on DMP in a subset of test locations selected by their proximity to the GA margin, or to locations neighboring repeatably nonresponding points on the 2 baseline tests (i.e., missed on both tests at baseline). These outcome measures were compared by their coefficient of variation (CoV; reflecting performance at capturing longitudinal change) and by sample size estimates for a 2-arm trial seeking to detect a ≥30% treatment effect. Changes in GA extent and best-corrected visual acuity (BCVA) were evaluated for comparison.
Main Outcome Measures: Coefficient of variation and sample size estimates.
Results: Evaluating PLM at points immediately adjacent (<1°) to repeatably nonresponding test locations at baseline (CoV = 47%) was the best-performing outcome measure on DMP testing. This measure outperformed BCVA (CoV = 188%; P < 0.001) at detecting longitudinal change and was comparable to evaluating GA extent (CoV = 58%; P = 0.097). Sample size requirements in a 24-month trial using this DMP outcome measure were lower by 46% and 94% compared with evaluating GA extent and BCVA, respectively.
Conclusions: Customized evaluation of DMP functional testing results in regions adjacent to repeatably nonresponding locations at baseline improved the detection of longitudinal change compared with evaluation of all test locations. These findings show that progressive visual function decline can be captured sensitively with this approach, supporting its use in future GA treatment trials.
Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
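The link between an outcome measure's coefficient of variation and trial sample size can be illustrated with a standard two-arm power calculation. This is a generic sketch, not necessarily the authors' exact method; the function name and the default alpha = 0.05 / 80% power choices are assumptions for illustration.

```python
from math import ceil
from statistics import NormalDist

def two_arm_sample_size(cov, effect, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a proportional treatment effect on the
    mean rate of change, where cov = SD(change) / mean(change).
    Standard normal-approximation formula; illustrative only."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~0.84
    return ceil(2 * (z_alpha + z_beta) ** 2 * (cov / effect) ** 2)

# CoV = 47% for the best DMP measure vs. 188% for BCVA; 30% treatment effect
n_dmp = two_arm_sample_size(0.47, 0.30)
n_bcva = two_arm_sample_size(1.88, 0.30)
```

Because the required n scales with CoV squared, the much lower CoV of the customized DMP measure translates directly into a far smaller trial, broadly consistent with the reported 94% saving relative to BCVA.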
CatSkill: Artificial Intelligence-Based Metrics for the Assessment of Surgical Skill Level from Intraoperative Cataract Surgery Video Recordings
Binh Duong Giap PhD, Dena Ballouz MD, Karthik Srinivasan MD, MS, Jefferson Lustre BS, Keely Likosky BS, Ossama Mahmoud MD, Shahzad I. Mian MD, Bradford L. Tannen MD, JD, Nambi Nallasamy MD
Ophthalmology Science, 5(4), Article 100764. Published March 14, 2025. DOI: 10.1016/j.xops.2025.100764

Purpose: To develop and validate a novel artificial intelligence (AI)-powered video analysis system to assess surgeon proficiency in maintaining (1) eye neutrality, (2) eye centration, and (3) adequate focus of the operating microscope in cataract surgery, and to evaluate differences in these metrics between attending cataract surgeons and ophthalmology residents.
Design: Retrospective surgical video analysis.
Subjects: Six hundred twenty complete surgical video recordings of 620 cataract surgeries performed by either attending surgeons or ophthalmology residents.
Main Outcome Measures: Performance of the proposed AI-powered video analysis system (CatSkill) for cataract surgery was evaluated at multiple stages. Anatomy and surgical landmark segmentation were reported as Dice coefficients. The proposed cataract surgery assessment metrics (CSAMs) were compared between attending and resident surgeons on a phase-wise basis. Surgery-level classification performance (attending vs. resident) of a machine learning (ML) algorithm trained on the CSAMs was assessed using area under the receiver operating characteristic curve (AUC).
Methods: An automated system involving video preprocessing, deep learning-based segmentation with limbus obstruction detection and compensation, and CSAM computation was designed to assess surgeon performance from surgical videos. Three CSAMs were computed to analyze 430 cataract surgeries (254 by attendings and 176 by residents). An ML algorithm was developed to predict surgeon training level using only the CSAMs.
Results: The CatSkill system using FPN (VGG16) achieved a Dice coefficient of 94.03% for segmentation of the palpebral fissure, limbus, and Purkinje image 1. Phase-wise mean CSAM scores were higher for attendings than residents across all surgical phases. Residents struggled with stability/centration during the Main Wound, Cortical Removal, Lens Insertion, and Wound Closure phases, and had difficulty maintaining adequate microscope focus during later phases of surgery. A random forest model using the CSAMs achieved an AUC of 0.865 in predicting the skill level (attending or resident) of the surgeon.
Conclusions: The proposed AI-derived CSAMs provide a high level of reliability in assessing the ability of surgeons to maintain eye neutrality, centration, and focus during cataract surgery. Furthermore, downstream analysis using an ML model for surgery-level classification indicates that the CSAMs provide significant predictive value for assessing the overall training level of the surgeon.
Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
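The Dice coefficient used above to score segmentation quality has a simple definition: twice the overlap of two masks, divided by their combined size. A minimal sketch with masks represented as sets of pixel coordinates; the set representation and the toy masks are illustrative, not the paper's implementation.

```python
def dice(a: set, b: set) -> float:
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Toy example: a predicted mask sharing 2 pixels with a 4-pixel reference
reference = {(1, 1), (1, 2), (2, 1), (2, 2)}   # 4 pixels
predicted = {(1, 1), (2, 1), (3, 3)}           # 3 pixels, 2 overlapping
```

A perfect segmentation scores 1.0, so the 94.03% reported for CatSkill corresponds to a Dice score of about 0.94.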
Longitudinal Imaging of the Parafoveal Cone Mosaic in Congenital Achromatopsia
Nickolas Chen MD, Katie M. Litts PhD, Danica Nikezic BS, Christopher S. Langlo MD, PhD, Brian P. Higgins BS, Byron L. Lam MD, Gerald A. Fishman MD, Frederick T. Collison OD, Mark E. Pennesi MD, PhD, Christine N. Kay MD, Sergey Tarima PhD, Joseph Carroll PhD
Ophthalmology Science, 5(4), Article 100765. Published March 14, 2025. DOI: 10.1016/j.xops.2025.100765

Purpose: To assess longitudinal changes in parafoveal cone density in individuals with congenital achromatopsia (ACHM).
Design: Retrospective longitudinal study.
Participants: Nineteen individuals (7 women and 12 men) with genetically confirmed ACHM. To be eligible, each had adaptive optics scanning light ophthalmoscope (AOSLO) images of the photoreceptor mosaic from ≥2 time points.
Methods: For each individual, follow-up AOSLO montages were aligned to the baseline montage. Regions of interest (ROIs) of 100 × 100 μm were extracted from the split-detection modality at locations 1°, 5°, and 10° temporal (T) from the peak cone density in each montage. All ROIs from follow-up visits were then manually aligned to their respective baseline ROI for that location. Cones were identified in each ROI by one observer, reviewed by a second observer, and confirmed together in a masked fashion. Cone density was calculated, and a linear mixed model was used to assess changes in density over time. A Wald test was performed to determine whether the cone density changes were statistically significant.
Main Outcome Measures: Parafoveal cone density (at 1°, 5°, and 10° T) as a function of time.
Results: The mean (± standard deviation [SD]) age at baseline was 21.6 ± 10.7 years, and the mean (±SD) follow-up period was 3.83 ± 2.93 years (range, 0.46-8.66 years). At 1° T, we observed a significant decrease of 352 cones/mm² per year (P = 0.0003). At 5° T, the linear mixed model showed a nonsignificant decrease of 58 cones/mm² per year (P = 0.504). At 10° T, we observed a significant decrease of 139 cones/mm² per year (P = 0.0188). For a 100 × 100 μm ROI, these density changes correspond to a reduction of between about 0.5 and 4 cones per year, depending on the location.
Conclusions: Parafoveal cone density estimates in ACHM show a small decrease over time. These changes are within previously reported longitudinal repeatability values for normal retinas, suggesting that the observed average cone loss may not be clinically meaningful. Further studies with longer follow-up times and more genetically heterogeneous, age-diverse populations are needed to better understand factors contributing to changes in foveal and parafoveal cone structure in ACHM over time.
Financial Disclosure(s): Proprietary or commercial disclosures may be found in the Footnotes and Disclosures at the end of this article.
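The conversion above, from a density slope in cones/mm² per year to cones lost per ROI per year, is a straightforward unit calculation that can be checked directly; the dictionary below simply restates the reported slopes.

```python
# A 100 x 100 micron ROI covers (0.1 mm)^2 = 0.01 mm^2, so a slope in
# cones/mm^2/year is scaled by 0.01 to give cones lost per ROI per year.
roi_area_mm2 = 0.1 * 0.1
slopes_per_mm2 = {"1T": 352, "5T": 58, "10T": 139}   # reported yearly decreases
cones_per_roi_per_year = {loc: s * roi_area_mm2 for loc, s in slopes_per_mm2.items()}
# roughly 3.5, 0.6, and 1.4 cones/year, matching "between about 0.5 and 4"
```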
A Neural Network for the Prediction of the Visual Acuity Gained from Vitrectomy and Peeling for Epiretinal Membrane
Rupert Kamnig MD, Noah Robatsch, Anna Hillenmayer MD, Denise Vogt MD, Susanna F. König MD, Efstathios Vounotrypidis MD, Armin Wolf MD, Christian M. Wertheimer MD
Ophthalmology Science, 5(4), Article 100762. Published March 13, 2025. DOI: 10.1016/j.xops.2025.100762

Purpose: A significant proportion of patients with epiretinal membrane (ERM) demonstrate improvement in visual acuity (VA) 3 months after pars plana vitrectomy (PPV) and membrane peeling. Identifying these patients before surgery is clinically relevant.
Design: Retrospective study establishing a neural network to predict improvement from preoperative clinical factors and OCT.
Subjects: A total of 427 eyes from 423 patients who underwent PPV for primary idiopathic ERM, with or without combined cataract surgery.
Methods: The data were automatically labeled according to whether an improvement of at least 2 logarithm of the minimum angle of resolution (logMAR) lines was observed. A multilayer perceptron was trained on a set of 7 clinical factors, and the images were processed using a convolutional network. The outputs of both networks were concatenated and presented to a second multilayer perceptron. The dataset was divided into training, validation, and test sets.
Main Outcome Measures: The accuracy of the neural network on an independent test set for the prediction of postoperative VA. The impact of individual clinical factors and images on performance was assessed using ablation studies and class activation maps.
Results: The clinical factors alone demonstrated the highest accuracy of 0.74, with a sensitivity of 0.82 and a specificity of 0.67. These results were obtained after excluding less significant factors in an ablation study; including age, preoperative lens status, preoperative VA, and the distinction between combined phacovitrectomy and vitrectomy alone yielded the most accurate results. In contrast, using ResNet18 for image processing alone (0.61) or images combined with clinical factors (0.70) reduced accuracy. In the class activation maps, image regions corresponding to the outer, central, and inner retina appeared important for the decision-making process.
Conclusions: Our neural network yielded favorable results, correctly predicting improvement in VA in approximately three-quarters of patients. This artificial intelligence-based personalized therapeutic strategy has the potential to aid decision-making. Future studies should assess its clinical potential and generalizability and improve accuracy by using a more extensive dataset.
Financial Disclosure(s): The author(s) have no proprietary or commercial interest in any materials discussed in this article.
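The two-branch architecture described in the methods (a multilayer perceptron on clinical factors, a convolutional branch on images, and their concatenated outputs fed to a second perceptron) can be sketched with random weights. All layer sizes are invented for illustration, and the convolutional branch is stubbed out as a random feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """Minimal two-layer perceptron with a ReLU hidden layer."""
    return np.maximum(x @ w1, 0.0) @ w2

# Branch 1: the 7 preoperative clinical factors -> a small feature vector
clinical = rng.normal(size=(1, 7))
clin_feat = mlp(clinical, 0.1 * rng.normal(size=(7, 32)), 0.1 * rng.normal(size=(32, 16)))

# Branch 2: stand-in for the convolutional network's OCT feature vector
img_feat = rng.normal(size=(1, 64))

# Concatenate both branches and classify with a second perceptron
fused = np.concatenate([clin_feat, img_feat], axis=1)          # shape (1, 80)
logit = mlp(fused, 0.1 * rng.normal(size=(80, 8)), 0.1 * rng.normal(size=(8, 1)))
p_gain = float(1 / (1 + np.exp(-logit)))   # P(>= 2 logMAR lines of VA gain)
```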
Vision-Threatening Diabetic Macular Ischemia Based on Inferred Progression Pathways in OCT Angiography
Miyo Yoshida MD, Tomoaki Murakami MD, PhD, Keiichi Nishikawa MD, Kenji Ishihara MD, PhD, Yuki Mori MD, PhD, Akitaka Tsujikawa MD, PhD
Ophthalmology Science, 5(4), Article 100761. Published March 12, 2025. DOI: 10.1016/j.xops.2025.100761

Purpose: To elucidate the progression pathways of diabetic macular ischemia (DMI) using OCT angiography (OCTA) images and to assess changes in visual acuity (VA) associated with each pathway.
Design: Single-center, prospective case series.
Participants: One hundred fifty-one eyes from 151 patients with a 3-year follow-up period.
Methods: We obtained 3 × 3 mm swept-source OCTA images and analyzed en face images within a central 2.5-mm-diameter circle. Nonperfusion squares (NPSs) were defined as 15 × 15-pixel squares without retinal vessels. Each eye at baseline and after 3 years was embedded into a 2-dimensional uniform manifold approximation and projection space and assigned to 1 of 5 severity grades (Initial, Mild, Superficial, Moderate, and Severe) using the k-nearest neighbors method. We assessed major transitions (involving ≥4 cases) over the 3 years. Subsequent probabilistic analyses enabled construction of a graphical model in which directed arrows represent inferred pathways of DMI progression. From this cohort, 103 eyes of 103 patients who received no ocular treatment during the follow-up period were subsequently evaluated for VA changes.
Main Outcome Measures: Inference of DMI progression pathways.
Results: In most cases, NPS counts increased in both the superficial and deep layers. The major transitions between severity groups at 3 years displayed a unique distribution, and probabilistic analyses suggested a directed graphical model comprising 7 inferred pathways of DMI progression: Initial to Mild, Initial to Superficial, Mild to Superficial, Mild to Moderate, Superficial to Moderate, Superficial to Severe, and Moderate to Severe. Eyes in the Mild and Superficial groups had greater increases in superficial NPS within the central sector than those in the Severe group. Additionally, deep NPS counts within the central sector decreased more in eyes of the Initial group than in those of the Superficial and Moderate groups. Notably, eyes of the Superficial and Moderate groups exhibited greater VA deterioration at 3 years compared with those in the Initial group.
Conclusions: A directed graphical model of DMI progression may serve as a useful tool for inferring progression pathways and predicting VA deterioration.
Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
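The grade-assignment step described above, labeling each embedded eye by the k-nearest neighbors method, reduces to a majority vote among nearby labeled points. A self-contained sketch on toy 2-D coordinates; the coordinates and the choice of k are invented, and the embedding step itself is not shown.

```python
from collections import Counter

def knn_grade(point, labeled, k=5):
    """Majority-vote severity grade among the k nearest labeled points
    in a 2-D embedding (squared Euclidean distance)."""
    sq = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(labeled, key=lambda item: sq(point, item[0]))[:k]
    return Counter(grade for _, grade in nearest).most_common(1)[0][0]

# Toy embedded eyes: (2-D coordinate, severity grade)
labeled = [((0.1, 0.2), "Initial"), ((0.2, 0.1), "Initial"),
           ((0.3, 0.3), "Mild"),    ((2.0, 2.1), "Severe"),
           ((2.2, 1.9), "Severe")]
```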
Sozinibercept (Anti-VEGF-C/-D) Combined with Ranibizumab for Polypoidal Choroidal Vasculopathy: Phase IIb Predefined Subgroup Analysis
Chui Ming Gemmy Cheung MD, FRCOphth, Timothy L. Jackson PhD, FRCOphth, Charles C. Wykoff MD, PhD, Arshad M. Khanani MD, FASRS, Ian M. Leitch PhD, Megan E. Baldwin PhD, Jason Slakter MD
Ophthalmology Science, 5(4), Article 100759. Published March 12, 2025. DOI: 10.1016/j.xops.2025.100759

Purpose: To assess the efficacy of sozinibercept, a novel "trap" inhibitor of VEGF-C and VEGF-D, when combined with ranibizumab for the treatment of polypoidal choroidal vasculopathy (PCV).
Design: Prespecified subgroup analysis of a randomized, double-masked, sham-controlled phase IIb trial.
Participants: Adults with treatment-naïve neovascular age-related macular degeneration.
Methods: Participants were randomized 1:1:1 to receive a total of 6 intravitreal injections of ranibizumab 0.5 mg given 4-weekly, in combination with either 0.5 mg sozinibercept, 2 mg sozinibercept, or sham injection (control). Active PCV was determined at baseline by masked readers at an independent imaging center on the basis of multimodal imaging, including OCT (notched, sharply peaked, or multilobular pigment epithelial detachments with or without a ring of hyperreflectivity along the inner border), fundus photography (subretinal orange nodules), and fluorescein angiography (typical primarily occult multifocal lesions).
Main Outcome Measures: The primary end point was mean change from baseline in best-corrected visual acuity (BCVA) through week 24. Secondary end points included categorical changes in BCVA from baseline, anatomic changes in lesion morphology, and safety.
Results: Of 366 participants, PCV was identified in 66 (18%) using the predefined criteria. Sozinibercept combination therapy produced a dose response, with a mean BCVA change from baseline to week 24 of +13.54 letters (2 mg, n = 22) and +10.87 letters (0.5 mg, n = 24), compared with +6.9 letters for ranibizumab alone (n = 20). The 2 mg sozinibercept combination group had a superior BCVA gain versus ranibizumab (+6.7-letter difference in least squares means; P = 0.0253), with more participants gaining ≥10 letters (77.3% vs. 47.4%) and ≥15 letters (40.9% vs. 31.6%) and fewer losing ≥5 letters (4.5% vs. 15.8%). Anatomic responses were consistent with functional outcomes: at week 24, fewer participants in the 2 mg sozinibercept combination group had subretinal fluid (19%) or intraretinal cysts (9.1%) than with ranibizumab monotherapy (42.1% and 25%, respectively). The safety profile of sozinibercept combination therapy was similar to that of ranibizumab.
Conclusions: In this predefined phase IIb subgroup of patients with PCV, sozinibercept combination therapy, through inhibition of VEGF-C/-D, achieved improved visual and anatomic outcomes compared with ranibizumab monotherapy, consistent with the overall population.
Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Diabetic Retinopathy Assessment through Multitask Learning Approach on Heterogeneous Fundus Image Datasets
Hongkang Wu MS, Kai Jin MD, PhD, Yiyang Jing MD, Wenyue Shen MD, Yih Chung Tham PhD, Xiangji Pan PhD, Victor Koh MD, PhD, Andrzej Grzybowski MD, PhD, Juan Ye MD, PhD
Ophthalmology Science, 5(5), Article 100755. Published March 11, 2025. DOI: 10.1016/j.xops.2025.100755

Objective: To develop and validate an artificial intelligence (AI)-based system, Diabetic Retinopathy Analysis Model Assistant (DRAMA), for diagnosing diabetic retinopathy (DR) across multisource heterogeneous datasets, with the aim of improving diagnostic accuracy and efficiency.
Design: Cross-sectional study conducted at Zhejiang University Eye Hospital and approved by the ethics committee.
Subjects: The study included 1500 retinal images from 957 participants aged 18 to 83 years. The dataset was divided into 3 subdatasets: color fundus photography, ultra-widefield imaging, and portable fundus camera. Images were annotated by 3 experienced ophthalmologists.
Methods: The AI system was built on EfficientNet-B2, pretrained on the ImageNet dataset. It performed 11 multilabel tasks, including image type identification, quality assessment, lesion detection, and diabetic macular edema (DME) detection. The model used LabelSmoothingCrossEntropy and the AdamP optimizer to enhance robustness and convergence. System performance was evaluated using metrics such as accuracy, sensitivity, specificity, and area under the curve (AUC). External validation was conducted using datasets from different clinical centers.
Main Outcome Measures: The primary outcomes measured were the accuracy, sensitivity, specificity, and AUC of the AI system in diagnosing DR.
Results: After exclusion of 218 poor-quality images, DRAMA demonstrated high diagnostic accuracy, with EfficientNet-B2 achieving 87.02% accuracy in quality assessment and 91.60% accuracy in lesion detection. AUCs were >0.95 for most tasks, and 0.93 for grading and DME detection. External validation showed slightly lower accuracy in some tasks but better performance in identifying hemorrhages and DME. DRAMA diagnosed the entire test set in 86 ms, far faster than the 90 to 100 minutes required by human graders.
Conclusions: DRAMA, an AI-based multitask model, showed high potential for clinical integration, significantly improving diagnostic efficiency and accuracy, particularly in resource-limited settings.
Financial Disclosure(s): The author(s) have no proprietary or commercial interest in any materials discussed in this article.
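The LabelSmoothingCrossEntropy loss named in the methods refers to a standard technique: the one-hot target is mixed with a uniform distribution before taking cross-entropy. A generic pure-Python sketch of that technique; the epsilon = 0.1 default and this exact formulation are common conventions, not confirmed details of the paper's implementation.

```python
import math

def label_smoothing_ce(logits, target, epsilon=0.1):
    """Cross-entropy against a smoothed target distribution:
    y_i = (1 - eps) * [i == target] + eps / K."""
    k = len(logits)
    m = max(logits)                                   # for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    smoothed = [(1 - epsilon) * (1.0 if i == target else 0.0) + epsilon / k
                for i in range(k)]
    return -sum(y * lp for y, lp in zip(smoothed, log_probs))
```

With epsilon = 0 this reduces to ordinary cross-entropy; with epsilon > 0 it penalizes overconfident predictions, which tends to improve robustness on noisy labels.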
Deep Learning-Driven Glaucoma Medication Bottle Recognition: A Multilingual Clinical Validation Study in Patients with Impaired Vision
Aidin C. Spina BS, Christopher D. Yang BS, Ayush Jain BS, Christine Ha BS, Lauren E. Chen MD, Philina Yee MD, Ken Y. Lin MD, PhD
Ophthalmology Science, 5(4), Article 100758. Published March 7, 2025. DOI: 10.1016/j.xops.2025.100758

Objective: To clinically validate a convolutional neural network (CNN)-based Android smartphone app for identifying topical glaucoma medications in patients with glaucoma and impaired vision.
Design: Nonrandomized prospective crossover study.
Participants: A total of 20 non-English-speaking (11 Spanish-speaking and 9 Vietnamese-speaking) and 21 English-speaking patients who presented to an academic glaucoma clinic from December 2023 through September 2024. Patients with poor vision were selected on the basis of visual acuity (VA) of 20/70 or worse in 1 eye, per the California Department of Motor Vehicles' driver's license screening standard.
Intervention: Enrolled subjects participated in a medication identification activity in which they identified a set of 6 topical glaucoma medications presented in randomized order. Subjects first identified half of the medications without the CNN-based app, then identified the remaining half with the app. Responses to a standardized ease-of-use survey were collected before and after using the app.
Main Outcome Measures: Primary quantitative outcomes from the medication identification activity were accuracy and time. Primary qualitative outcomes from the ease-of-use survey were subjective ratings of ease of smartphone app use.
Results: The CNN-based mobile app achieved a mean average precision of 98.8% and a recall of 97.2%. Identification accuracy improved significantly, from 27.6% without the app to 99.2% with the app across all participants, with no significant change in identification time. This improvement in accuracy was similar among non-English-speaking (71.6%) and English-speaking (71.4%) participants. The odds ratio (OR) for identification accuracy with the app was 319.353 (P < 0.001), with substantial improvement in both non-English-speaking (OR = 162.779, P < 0.001) and English-speaking participants (no applicable OR, given 100% identification accuracy). Survey data indicated that 81% of English speakers and 30% of non-English speakers found the app "very easy" to use, with overall ease of use strongly associated with improved accuracy.
Conclusions: The CNN-based mobile app significantly improves medication identification accuracy in patients with glaucomatous vision loss without increasing time to identification. This tool has the potential to enhance adherence in both English- and non-English-speaking populations and offers a practical adjunct to daily medication management for patients with glaucoma and low VA.
Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
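An odds ratio like the 319.353 reported above compares odds rather than raw proportions. The helper below shows the arithmetic on the pooled accuracies (99.2% with the app vs. 27.6% without); it is illustrative only, since the reported value comes from the study's own statistical model rather than this raw calculation.

```python
def odds_ratio(p1, p2):
    """Odds ratio for success probability p1 vs. p2:
    (p1 / (1 - p1)) / (p2 / (1 - p2))."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Raw pooled accuracies: 99.2% with the app vs. 27.6% without it
or_raw = odds_ratio(0.992, 0.276)   # on the order of the reported OR
```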
Open-Source Periorbital Segmentation Dataset for Ophthalmic Applications
George R. Nahass BA, Emma Koehler BS, Nicholas Tomaras BS, Danny Lopez BS, Madison Cheung BS, Alexander Palacios BA, Jeffrey C. Peterson MD, PhD, Sasha Hubschman MD, Kelsey Green BS, Chad A. Purnell MD, Pete Setabutr MD, Ann Q. Tran MD, Darvin Yi PhD
Ophthalmology Science, 5(4), Article 100757. Published March 5, 2025. DOI: 10.1016/j.xops.2025.100757

Objective: To create and validate a dataset for oculoplastic segmentation and periorbital distance prediction.
Design: Experimental study.
Subjects: Images of faces from 2 open-source datasets.
Methods: The images were sourced from 2 open-source datasets and cropped to include only the eyes. In all images, the iris, sclera, lid, caruncle, and brow were segmented by 5 trained annotators. For intergrader reliability analysis, the 5 annotators annotated the same 100 randomly selected images after at least a 2-week forgetting period; for intragrader analysis, the 5 annotators re-annotated the same 20 images after a 2-week forgetting period. Three DeepLabV3 segmentation models were trained on the datasets following standard procedures.
Main Outcome Measures: Annotation quality was evaluated by Dice score in intragrader and intergrader experiments. Segmentation models were trained to demonstrate the dataset's utility for deep learning, and the Dice score was used to evaluate them.
Results: We annotated 2842 images. Intergrader agreement on a randomly selected subset of 100 images was very high, with an average Dice score of 0.82 ± 0.01. Intragrader analysis also demonstrated that the same grader accurately reproduces annotations, with an average Dice score across all classes of 0.81 ± 0.08. The average Dice score across all classes of a segmentation network trained on the Chicago Facial dataset, the CelebAMask-HQ dataset, and both combined was 0.90 ± 0.11, 0.81 ± 0.20, and 0.84 ± 0.18, respectively.
Conclusions: We have developed a first-of-its-kind dataset for oculoplastic and craniofacial segmentation tasks. All annotations are publicly available for free download. Access to segmentation datasets designed specifically for oculoplastic surgery will permit more rapid development of clinically useful segmentation networks that can be leveraged for periorbital distance prediction and other downstream tasks. In addition to the annotations, we provide an open-source toolkit for periorbital distance prediction from segmentation masks, available via an application programming interface. The weights of all models have also been open-sourced and are publicly available for use by the community.
Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.