Latest Articles in Ophthalmology Science

AlphaMissense Predictions and ClinVar Annotations: A Deep Learning Approach to Uveal Melanoma
IF 3.2
Ophthalmology Science Pub Date: 2024-12-06 DOI: 10.1016/j.xops.2024.100673
David J. Taylor Gonzalez MD, Mak B. Djulbegovic MD, MSc, Meghan Sharma MD, MPH, Michael Antonietti BS, Colin K. Kim BS, Vladimir N. Uversky PhD, DSc, Carol L. Karp MD, Carol L. Shields MD, Matthew W. Wilson MD
{"title":"AlphaMissense Predictions and ClinVar Annotations: A Deep Learning Approach to Uveal Melanoma","authors":"David J. Taylor Gonzalez MD ,&nbsp;Mak B. Djulbegovic MD, MSc ,&nbsp;Meghan Sharma MD, MPH ,&nbsp;Michael Antonietti BS ,&nbsp;Colin K. Kim BS ,&nbsp;Vladimir N. Uversky PhD, DSc ,&nbsp;Carol L. Karp MD ,&nbsp;Carol L. Shields MD ,&nbsp;Matthew W. Wilson MD","doi":"10.1016/j.xops.2024.100673","DOIUrl":"10.1016/j.xops.2024.100673","url":null,"abstract":"<div><h3>Objective</h3><div>Uveal melanoma (UM) poses significant diagnostic and prognostic challenges due to its variable genetic landscape. We explore the use of a novel deep learning tool to assess the functional impact of genetic mutations in UM.</div></div><div><h3>Design</h3><div>A cross-sectional bioinformatics exploratory data analysis of genetic mutations from UM cases.</div></div><div><h3>Subjects</h3><div>Genetic data from patients diagnosed with UM were analyzed, explicitly focusing on missense mutations sourced from the Catalogue of Somatic Mutations in Cancer (COSMIC) database.</div></div><div><h3>Methods</h3><div>We identified missense mutations frequently observed in UM using the COSMIC database, assessed their potential pathogenicity using AlphaMissense, and visualized mutations using AlphaFold. Clinical significance was cross-validated with entries in the ClinVar database.</div></div><div><h3>Main Outcome Measures</h3><div>The primary outcomes measured were the agreement rates between AlphaMissense predictions and ClinVar annotations regarding the pathogenicity of mutations in critical genes associated with UM, such as <em>GNAQ, GNA11, SF3B1, EIF1AX</em>, and <em>BAP1</em>.</div></div><div><h3>Results</h3><div>Missense substitutions comprised 91.35% (n = 1310) of mutations in UM found on COSMIC. Of the 151 unique missense mutations analyzed in the most frequently mutated genes, only 40.4% (n = 61) had corresponding data in ClinVar. Notably, AlphaMissense provided definitive classifications for 27.2% (n = 41) of the mutations, which were labeled as “unknown significance” in ClinVar, underscoring its potential to offer more clarity in ambiguous cases. When excluding these mutations of uncertain significance, AlphaMissense showed perfect agreement (100%) with ClinVar across all analyzed genes, demonstrating no discrepancies where a mutation predicted as “pathogenic” was classified as “benign” or vice versa.</div></div><div><h3>Conclusions</h3><div>Integrating deep learning through AlphaMissense offers a promising approach to understanding the mutational landscape of UM. Our methodology holds the potential to improve genomic diagnostics and inform the development of personalized treatment strategies for UM.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100673"},"PeriodicalIF":3.2,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
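A minimal sketch of the concordance analysis this abstract describes: per-gene agreement between AlphaMissense classifications and ClinVar annotations, after excluding variants ClinVar labels as "unknown significance." The file and column names below are hypothetical, not from the paper.

```python
# Hypothetical variant table with columns: gene, protein_change,
# am_class (AlphaMissense call), clinvar_class (ClinVar annotation).
import pandas as pd

variants = pd.read_csv("um_missense_variants.csv")  # illustrative path

# Drop variants of uncertain significance, as in the study's final comparison.
resolved = variants[variants["clinvar_class"] != "unknown significance"]

agreement = (
    resolved.assign(agree=resolved["am_class"] == resolved["clinvar_class"])
    .groupby("gene")["agree"]
    .mean()
)
print(agreement)  # fraction of concordant calls per gene (1.0 = perfect agreement)
```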
Artificial Intelligence Models to Identify Patients with High Probability of Glaucoma Using Electronic Health Records
IF 3.2
Ophthalmology Science Pub Date: 2024-12-06 DOI: 10.1016/j.xops.2024.100671
Rohith Ravindranath MS, Sophia Y. Wang MD, MS
{"title":"Artificial Intelligence Models to Identify Patients with High Probability of Glaucoma Using Electronic Health Records","authors":"Rohith Ravindranath MS,&nbsp;Sophia Y. Wang MD, MS","doi":"10.1016/j.xops.2024.100671","DOIUrl":"10.1016/j.xops.2024.100671","url":null,"abstract":"<div><h3>Purpose</h3><div>Early detection of glaucoma allows for timely treatment to prevent severe vision loss, but screening requires resource-intensive examinations and imaging, which are challenging for large-scale implementation and evaluation. The purpose of this study was to develop artificial intelligence models that can utilize the wealth of data stored in electronic health records (EHRs) to identify patients who have high probability of developing glaucoma, without the use of any dedicated ophthalmic imaging or clinical data.</div></div><div><h3>Design</h3><div>Cohort study.</div></div><div><h3>Participants</h3><div>A total of 64 735 participants who were ≥18 years of age and had ≥2 separate encounters with eye-related diagnoses recorded in their EHR records in the All of Us Research Program, a national multicenter cohort of patients contributing EHR and survey data, and who were enrolled from May 1, 2018, to July 1, 2022.</div></div><div><h3>Methods</h3><div>We developed models to predict which patients had a diagnosis of glaucoma, using the following machine learning approaches: (1) penalized logistic regression, (2) XGBoost, and (3) a deep learning architecture that included a 1-dimensional convolutional neural network (1D-CNN) and stacked autoencoders. Model input features included demographics and only the nonophthalmic lab results, measurements, medications, and diagnoses available from structured EHR data.</div></div><div><h3>Main Outcome Measures</h3><div>Evaluation metrics included area under the receiver operating characteristic curve (AUROC).</div></div><div><h3>Results</h3><div>Of 64 735 patients, 7268 (11.22%) had a glaucoma diagnosis. Overall, AUROC ranged from 0.796 to 0.863. The 1D-CNN model achieved the highest performance with an AUROC score of 0.863 (95% confidence interval [CI], 0.862–0.864). Investigation of 1D-CNN model performance stratified by race/ethnicity showed that AUROC ranged from 0.825 to 0.869 by subpopulation, with the highest performance of 0.869 (95% CI, 0.868–0.870) among the non-Hispanic White subpopulation.</div></div><div><h3>Conclusions</h3><div>Machine and deep learning models were able to use the extensive systematic data within EHR to identify individuals with glaucoma, without the need for ophthalmic imaging or clinical data. These models could potentially automate identifying high-risk glaucoma patients in EHRs, aiding targeted screening referrals. 
Additional research is needed to investigate the impact of protected class characteristics such as race/ethnicity on model performance and fairness.</div></div><div><h3>Financial Disclosure(s)</h3><div>The author(s) have no proprietary or commercial interest in any materials discussed in this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100671"},"PeriodicalIF":3.2,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
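The simplest of the three named approaches, penalized logistic regression scored by AUROC, can be sketched in a few lines. The feature matrix below is synthetic stand-in data at roughly the cohort's 11% prevalence; the actual study used demographics plus nonophthalmic labs, measurements, medications, and diagnoses from structured EHR data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))       # stand-in for structured EHR features
y = rng.binomial(1, 0.11, size=5000)   # ~11% glaucoma prevalence, as in the cohort

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)  # penalized baseline
model.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

On random features the AUROC will hover near 0.5; the scaffold only illustrates the training and evaluation pipeline, not the reported performance.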
Geometrical Features of Subbasal Corneal Whorl-like Nerve Patterns in Dry Eye Disease
IF 3.2
Ophthalmology Science Pub Date: 2024-12-05 DOI: 10.1016/j.xops.2024.100669
Ziqing Feng MD, Kang Yu MD, Yupei Chen MS, Gengyuan Wang MS, Yuqing Deng MD, Wei Wang MD, Ruiwen Xu MD, Yimin Zhang MD, Peng Xiao PhD, Jin Yuan MD, PhD
{"title":"Geometrical Features of Subbasal Corneal Whorl-like Nerve Patterns in Dry Eye Disease","authors":"Ziqing Feng MD,&nbsp;Kang Yu MD,&nbsp;Yupei Chen MS,&nbsp;Gengyuan Wang MS,&nbsp;Yuqing Deng MD,&nbsp;Wei Wang MD,&nbsp;Ruiwen Xu MD,&nbsp;Yimin Zhang MD,&nbsp;Peng Xiao PhD,&nbsp;Jin Yuan MD, PhD","doi":"10.1016/j.xops.2024.100669","DOIUrl":"10.1016/j.xops.2024.100669","url":null,"abstract":"<div><h3>Purpose</h3><div>To investigate the geometrical feature of the whorl-like corneal nerve in dry eye disease (DED) across different severity levels and subtypes and preliminarily explore its diagnostic ability.</div></div><div><h3>Design</h3><div>Cross-sectional study.</div></div><div><h3>Participants</h3><div>The study included 29 healthy subjects (51 eyes) and 62 DED patients (95 eyes).</div></div><div><h3>Methods</h3><div>All subjects underwent comprehensive ophthalmic examinations, dry eye tests, and in vivo confocal microscopy to visualize the whorl-like corneal nerve at the inferior whorl (IW) region and the straight nerve at the central cornea. The structure of the corneal nerve was extracted and characterized using the fractal dimension (CND<sub>f</sub>), multifractal dimension (CND<sub>0</sub>), tortuosity (CNTor), fiber length (CNFL), and numbers of branching points.</div></div><div><h3>Main Outcome Measures</h3><div>The characteristics of quantified whorl-like corneal nerve metrics in different groups of severity and subtype defined by symptoms and signs of DED.</div></div><div><h3>Results</h3><div>Compared with the healthy controls, the CND<sub>f</sub>, CND<sub>0</sub>, and CNFL of the IW decreased significantly as early as grade 1 DED (<em>P</em> &lt; 0.05), whereas CNTor increased (<em>P</em> &lt; 0.05). These parameters did not change significantly in the straight nerve. As the DED severity increased, CND<sub>f</sub> and CNFL in the whorl-like nerve further decreased in grade 3 DED compared with grade 1. Significant nerve fiber loss was observed in aqueous-deficient DED compared with evaporative DED (<em>P</em> &lt; 0.05). Whorl-like nerve metrics correlated with ocular discomfort, tear film break-up time, tear secretion, and corneal fluorescein staining, respectively (<em>P</em> &lt; 0.05). Furthermore, merging parameters of whorl-like and linear nerve showed an area under the curve value of 0.910 in diagnosing DED.</div></div><div><h3>Conclusions</h3><div>Geometrical parameters of IW could potentially allow optimization of the staging of DED. Reliable and objective measurements for the whorl-like cornea nerve might facilitate patient stratification and diagnosis of DED.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100669"},"PeriodicalIF":3.2,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11787521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143082487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
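Fractal dimension of a nerve pattern, the CNDf-style metric above, is commonly estimated by box counting on a binarized image. The sketch below is a generic textbook estimator, not the authors' implementation.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the fractal dimension of a 2D binary nerve mask via box counting."""
    size = min(mask.shape)
    scales = [2 ** k for k in range(1, int(np.log2(size)))]
    counts = []
    for s in scales:
        # Count boxes of side s that contain at least one nerve pixel.
        h, w = mask.shape[0] // s, mask.shape[1] // s
        boxes = mask[: h * s, : w * s].reshape(h, s, w, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # The dimension is the slope of log(count) against log(1/scale).
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope
```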
Validation of Deep Learning–Based Automatic Retinal Layer Segmentation Algorithms for Age-Related Macular Degeneration with 2 Spectral-Domain OCT Devices
IF 3.2
Ophthalmology Science Pub Date: 2024-12-04 DOI: 10.1016/j.xops.2024.100670
Souvick Mukherjee PhD, Tharindu De Silva PhD, Cameron Duic BS, Gopal Jayakar BS, Tiarnan D.L. Keenan BM BCh, PhD, Alisa T. Thavikulwat MD, Emily Chew MD, Catherine Cukras MD, PhD
{"title":"Validation of Deep Learning–Based Automatic Retinal Layer Segmentation Algorithms for Age-Related Macular Degeneration with 2 Spectral-Domain OCT Devices","authors":"Souvick Mukherjee PhD ,&nbsp;Tharindu De Silva PhD ,&nbsp;Cameron Duic BS ,&nbsp;Gopal Jayakar BS ,&nbsp;Tiarnan D.L. Keenan BM BCh, PhD ,&nbsp;Alisa T. Thavikulwat MD ,&nbsp;Emily Chew MD ,&nbsp;Catherine Cukras MD, PhD","doi":"10.1016/j.xops.2024.100670","DOIUrl":"10.1016/j.xops.2024.100670","url":null,"abstract":"&lt;div&gt;&lt;h3&gt;Purpose&lt;/h3&gt;&lt;div&gt;Segmentations of retinal layers in spectral-domain OCT (SD-OCT) images serve as a crucial tool for identifying and analyzing the progression of various retinal diseases, encompassing a broad spectrum of abnormalities associated with age-related macular degeneration (AMD). The training of deep learning algorithms necessitates well-defined ground truth labels, validated by experts, to delineate boundaries accurately. However, this resource-intensive process has constrained the widespread application of such algorithms across diverse OCT devices. This work validates deep learning image segmentation models across multiple OCT devices by testing robustness in generating clinically relevant metrics.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Design&lt;/h3&gt;&lt;div&gt;Prospective comparative study.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Participants&lt;/h3&gt;&lt;div&gt;Adults &gt;50 years of age with no AMD to advanced AMD, as defined in the Age-Related Eye Disease Study, in ≥1 eye, were enrolled. Four hundred two SD-OCT scans were used in this study.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Methods&lt;/h3&gt;&lt;div&gt;We evaluate 2 separate state-of-the-art segmentation algorithms through a training process using images obtained from 1 OCT device (Heidelberg-Spectralis) and subsequent testing using images acquired from 2 OCT devices (Heidelberg-Spectralis and Zeiss-Cirrus). This assessment is performed on a dataset that encompasses a range of retinal pathologies, spanning from disease-free conditions to severe forms of AMD, with a focus on evaluating the device independence of the algorithms.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Main Outcome Measures&lt;/h3&gt;&lt;div&gt;Performance metrics (including mean squared error, mean absolute error [MAE], and Dice coefficients) for the segmentations of the internal limiting membrane (ILM), retinal pigment epithelium (RPE), and RPE to Bruch’s membrane region, along with en face thickness maps, volumetric estimations (in mm&lt;sup&gt;3&lt;/sup&gt;). Violin plots and Bland–Altman plots comparing predictions against ground truth are also presented.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Results&lt;/h3&gt;&lt;div&gt;The UNet and DeepLabv3, trained on Spectralis B-scans, demonstrate clinically useful outcomes when applied to Cirrus test B-scans. Review of the Cirrus test data by 2 independent annotators revealed that the aggregated MAE in pixels for ILM was 1.82 ± 0.24 (equivalent to 7.0 ± 0.9 μm) and for RPE was 2.46 ± 0.66 (9.5 ± 2.6 μm). Additionally, the Dice similarity coefficient for the RPE drusen complex region, comparing predictions to ground truth, reached 0.87 ± 0.01.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Conclusions&lt;/h3&gt;&lt;div&gt;In the pursuit of task-specific goals such as retinal layer segmentation, a segmentation network has the capacity to acquire domain-independent features from a large training dataset. 
This enables the utilization of the network to execute tasks in domains where ground truth is hard to generate.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Financial Disclosure(s)&lt;/h3&gt;&lt;div&gt;Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end ","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100670"},"PeriodicalIF":3.2,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
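The reported boundary MAE and Dice coefficient follow their standard definitions. A minimal sketch, assuming each boundary is stored as one row index per A-scan and each region as a binary mask:

```python
import numpy as np

def boundary_mae(pred_rows: np.ndarray, true_rows: np.ndarray) -> float:
    """Mean absolute error in pixels between two layer-boundary profiles."""
    return float(np.mean(np.abs(pred_rows - true_rows)))

def dice(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary region masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum())

# Converting pixel MAE to microns requires the device's axial scale; the
# paper's figures imply roughly 3.85 um/pixel (1.82 px = 7.0 um) here.
```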
Could Infectious Agents Play a Role in the Onset of Age-related Macular Degeneration? A Scoping Review
IF 3.2
Ophthalmology Science Pub Date: 2024-11-30 DOI: 10.1016/j.xops.2024.100668
Petra P. Larsen MD, PhD, Virginie Dinet PhD, Cécile Delcourt PhD, Catherine Helmer MD, PhD, Morgane Linard MD, PhD
{"title":"Could Infectious Agents Play a Role in the Onset of Age-related Macular Degeneration? A Scoping Review","authors":"Petra P. Larsen MD, PhD ,&nbsp;Virginie Dinet PhD ,&nbsp;Cécile Delcourt PhD ,&nbsp;Catherine Helmer MD, PhD ,&nbsp;Morgane Linard MD, PhD","doi":"10.1016/j.xops.2024.100668","DOIUrl":"10.1016/j.xops.2024.100668","url":null,"abstract":"<div><h3>Topic</h3><div>This scoping review aims to summarize the current state of knowledge on the potential involvement of infections in age-related macular degeneration (AMD).</div></div><div><h3>Clinical relevance</h3><div>Age-related macular degeneration is a multifactorial disease and the leading cause of vision loss among older adults in developed countries. Clarifying whether certain infections participate in its onset or progression seems essential, given the potential implications for treatment and prevention.</div></div><div><h3>Methods</h3><div>Using the PubMed database, we searched for articles in English, published until June 1, 2023, whose title and/or abstract contained terms related to AMD and infections. All types of study design, infectious agents, AMD diagnostic methods, and AMD stages were considered. Articles dealing with the oral and gut microbiota were not included but we provide a brief summary of high-quality literature reviews recently published on the subject.</div></div><div><h3>Results</h3><div>Two investigators independently screened the 868 articles obtained by our algorithm and the reference lists of selected studies. In total, 40 articles were included, among which 30 on human data, 9 animal studies, 6 in vitro experiments, and 1 hypothesis paper (sometimes with several data types in the same article). Of these, 27 studies were published after 2010, highlighting a growing interest in recent years. A wide range of infectious agents has been investigated, including various microbiota (nasal, pharyngeal), 8 bacteria, 6 viral species, and 1 yeast. Among them, most have been investigated anecdotally. Only <em>Chlamydia pneumoniae</em>, <em>Cytomegalovirus</em>, and hepatitis B virus received more attention with 17, 6, and 4 studies, respectively. Numerous potential pathophysiological mechanisms have been discussed, including (1) an indirect role of infectious agents (i.e. 
a role of infections located distant from the eye, mainly through their interactions with the immune system) and (2) a direct role of some infectious agents implying potential infection of various cells types within AMD-related tissues.</div></div><div><h3>Conclusions</h3><div>Overall, this review highlights the diversity of possible interactions between infectious agents and AMD and suggests avenues of research to enrich the data currently available, which provide an insufficient level of evidence to conclude whether or not infectious agents are involved in this pathology.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100668"},"PeriodicalIF":3.2,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143169440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Glaucoma Detection and Feature Identification via GPT-4V Fundus Image Analysis
IF 3.2
Ophthalmology Science Pub Date: 2024-11-29 DOI: 10.1016/j.xops.2024.100667
Jalil Jalili PhD, Anuwat Jiravarnsirikul MD, Christopher Bowd PhD, Benton Chuter MD, Akram Belghith PhD, Michael H. Goldbaum MD, Sally L. Baxter MD, Robert N. Weinreb MD, Linda M. Zangwill PhD, Mark Christopher PhD
{"title":"Glaucoma Detection and Feature Identification via GPT-4V Fundus Image Analysis","authors":"Jalil Jalili PhD ,&nbsp;Anuwat Jiravarnsirikul MD ,&nbsp;Christopher Bowd PhD ,&nbsp;Benton Chuter MD ,&nbsp;Akram Belghith PhD ,&nbsp;Michael H. Goldbaum MD ,&nbsp;Sally L. Baxter MD ,&nbsp;Robert N. Weinreb MD ,&nbsp;Linda M. Zangwill PhD ,&nbsp;Mark Christopher PhD","doi":"10.1016/j.xops.2024.100667","DOIUrl":"10.1016/j.xops.2024.100667","url":null,"abstract":"<div><h3>Purpose</h3><div>The aim is to assess GPT-4V's (OpenAI) diagnostic accuracy and its capability to identify glaucoma-related features compared to expert evaluations.</div></div><div><h3>Design</h3><div>Evaluation of multimodal large language models for reviewing fundus images in glaucoma.</div></div><div><h3>Subjects</h3><div>A total of 300 fundus images from 3 public datasets (ACRIMA, ORIGA, and RIM-One v3) that included 139 glaucomatous and 161 nonglaucomatous cases were analyzed.</div></div><div><h3>Methods</h3><div>Preprocessing ensured each image was centered on the optic disc. GPT-4's vision-preview model (GPT-4V) assessed each image for various glaucoma-related criteria: image quality, image gradability, cup-to-disc ratio, peripapillary atrophy, disc hemorrhages, rim thinning (by quadrant and clock hour), glaucoma status, and estimated probability of glaucoma. Each image was analyzed twice by GPT-4V to evaluate consistency in its predictions. Two expert graders independently evaluated the same images using identical criteria. Comparisons between GPT-4V's assessments, expert evaluations, and dataset labels were made to determine accuracy, sensitivity, specificity, and Cohen kappa.</div></div><div><h3>Main Outcome Measures</h3><div>The main parameters measured were the accuracy, sensitivity, specificity, and Cohen kappa of GPT-4V in detecting glaucoma compared with expert evaluations.</div></div><div><h3>Results</h3><div>GPT-4V successfully provided glaucoma assessments for all 300 fundus images across the datasets, although approximately 35% required multiple prompt submissions. GPT-4V's overall accuracy in glaucoma detection was slightly lower (0.68, 0.70, and 0.81, respectively) than that of expert graders (0.78, 0.80, and 0.88, for expert grader 1 and 0.72, 0.78, and 0.87, for expert grader 2, respectively), across the ACRIMA, ORIGA, and RIM-ONE datasets. In Glaucoma detection, GPT-4V showed variable agreement by dataset and expert graders, with Cohen kappa values ranging from 0.08 to 0.72. 
In terms of feature detection, GPT-4V demonstrated high consistency (repeatability) in image gradability, with an agreement accuracy of ≥89% and substantial agreement in rim thinning and cup-to-disc ratio assessments, although kappas were generally lower than expert-to-expert agreement.</div></div><div><h3>Conclusions</h3><div>GPT-4V shows promise as a tool in glaucoma screening and detection through fundus image analysis, demonstrating generally high agreement with expert evaluations of key diagnostic features, although agreement did vary substantially across datasets.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100667"},"PeriodicalIF":3.2,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773068/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
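Once per-image labels are collected, agreement statistics like those above reduce to a couple of library calls. A toy sketch of accuracy and Cohen kappa between GPT-4V calls and one expert grader, with invented labels:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Invented per-image labels, for illustration only.
gpt4v_calls  = ["glaucoma", "normal", "glaucoma", "normal", "glaucoma", "normal"]
expert_calls = ["glaucoma", "normal", "normal",   "normal", "glaucoma", "normal"]

print("accuracy:", accuracy_score(expert_calls, gpt4v_calls))
print("kappa:   ", cohen_kappa_score(expert_calls, gpt4v_calls))  # chance-corrected agreement
```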
Multimodal Deep Learning for Differentiating Bacterial and Fungal Keratitis Using Prospective Representative Data
IF 3.2
Ophthalmology Science Pub Date: 2024-11-29 DOI: 10.1016/j.xops.2024.100665
N.V. Prajna MD, Jad Assaf MD, Nisha R. Acharya MD, MS, Jennifer Rose-Nussbaumer MD, Thomas M. Lietman MD, J. Peter Campbell MD, MPH, Jeremy D. Keenan MD, MPH, Xubo Song PhD, Travis K. Redd MD, MPH
{"title":"Multimodal Deep Learning for Differentiating Bacterial and Fungal Keratitis Using Prospective Representative Data","authors":"N.V. Prajna MD ,&nbsp;Jad Assaf MD ,&nbsp;Nisha R. Acharya MD, MS ,&nbsp;Jennifer Rose-Nussbaumer MD ,&nbsp;Thomas M. Lietman MD ,&nbsp;J. Peter Campbell MD, MPH ,&nbsp;Jeremy D. Keenan MD, MPH ,&nbsp;Xubo Song PhD ,&nbsp;Travis K. Redd MD, MPH","doi":"10.1016/j.xops.2024.100665","DOIUrl":"10.1016/j.xops.2024.100665","url":null,"abstract":"<div><h3>Objective</h3><div>This study develops and evaluates multimodal machine learning models for differentiating bacterial and fungal keratitis using a prospective representative dataset from South India.</div></div><div><h3>Design</h3><div>Machine learning classifier training and validation study.</div></div><div><h3>Participants</h3><div>Five hundred ninety-nine subjects diagnosed with acute infectious keratitis at Aravind Eye Hospital in Madurai, India.</div></div><div><h3>Methods</h3><div>We developed and compared 3 prediction models to distinguish bacterial and fungal keratitis using a prospective, consecutively-collected, representative dataset gathered over a full calendar year (the MADURAI dataset). These models included a clinical data model, a computer vision model using the EfficientNet architecture, and a multimodal model combining both imaging and clinical data. We partitioned the MADURAI dataset into 70% train/validation and 30% test sets. Model training was performed with fivefold cross-validation. We also compared the performance of the MADURAI-trained computer vision model against a model with identical architecture but trained on a preexisting dataset collated from multiple prior bacterial and fungal keratitis randomized clinical trials (RCTs) (the RCT-trained computer vision model).</div></div><div><h3>Main Outcome Measures</h3><div>The primary evaluation metric was the area under the precision-recall curve (AUPRC). Secondary metrics included area under the receiver operating characteristic curve (AUROC), accuracy, and F1 score.</div></div><div><h3>Results</h3><div>The MADURAI-trained computer vision model outperformed the clinical data model and the RCT-trained computer vision model on the hold-out test set, with an AUPRC 0.94 (95% confidence interval: 0.92–0.96), AUROC 0.81 (0.76–0.85), accuracy 77%, and F1 score 0.85. The multimodal model did not substantially improve performance compared with the computer vision model.</div></div><div><h3>Conclusions</h3><div>The best-performing machine learning classifier for infectious keratitis was a computer vision model trained using the MADURAI dataset. 
These findings suggest that image-based deep learning could significantly enhance diagnostic capabilities for infectious keratitis and emphasize the importance of using prospective, consecutively-collected, representative data for machine learning model training and evaluation.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100665"},"PeriodicalIF":3.2,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758206/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
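The primary metric here, AUPRC, is informative when one class is rarer or clinically costlier to miss. A minimal computation with scikit-learn on made-up labels and scores:

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

# Invented labels/scores for illustration: 1 = fungal, 0 = bacterial.
y_true  = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.9, 0.2, 0.7, 0.85, 0.4, 0.6, 0.3, 0.8])

print("AUPRC:", average_precision_score(y_true, y_score))  # area under precision-recall
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
```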
Associations of Retinal Microvascular Density and Fractal Dimension with Glaucoma: A Prospective Study from UK Biobank
IF 3.2
Ophthalmology Science Pub Date: 2024-11-28 DOI: 10.1016/j.xops.2024.100661
Qi Chen MD, Suyu Miao MD, Yuzhe Jiang MD, Danli Shi MD, PhD, Weiyun You MD, Lin Liu MD, PhD, Mayinuer Yusufu MTI, Yufan Chen MD, Ruobing Wang MD, PhD
{"title":"Associations of Retinal Microvascular Density and Fractal Dimension with Glaucoma: A Prospective Study from UK Biobank","authors":"Qi Chen MD ,&nbsp;Suyu Miao MD ,&nbsp;Yuzhe Jiang MD ,&nbsp;Danli Shi MD, PhD ,&nbsp;Weiyun You MD ,&nbsp;Lin Liu MD, PhD ,&nbsp;Mayinuer Yusufu MTI ,&nbsp;Yufan Chen MD ,&nbsp;Ruobing Wang MD, PhD","doi":"10.1016/j.xops.2024.100661","DOIUrl":"10.1016/j.xops.2024.100661","url":null,"abstract":"<div><h3>Objective</h3><div>To explore the association between retinal microvascular parameters and glaucoma.</div></div><div><h3>Design</h3><div>Prospective study.</div></div><div><h3>Subjects</h3><div>The UK Biobank subjects with fundus images and without a history of glaucoma.</div></div><div><h3>Methods</h3><div>We employed the Retina-based Microvascular Health Assessment System to utilize the noninvasive nature of fundus photography and quantify retinal microvascular parameters including retinal vascular skeleton density (VSD) and fractal dimension (FD). We also utilized propensity score matching (PSM) to pair individuals with glaucoma and healthy controls. Propensity score matching was implemented via a logistic regression model with a caliper of 0.1 and a matching ratio of 1:4 no replacements. We conducted univariable Cox regression analyses to study the association between retinal microvascular parameters and incident glaucoma, in both continuous and quartile forms.</div></div><div><h3>Main Outcome Measure</h3><div>Vascular skeleton density, FD, and glaucoma.</div></div><div><h3>Results</h3><div>In a study of 41 632 participants without prior glaucoma, 482 cases of glaucoma were recorded during a median follow-up of 11.0 years. In the Cox proportional hazards regression model post-PSM, we found that incident glaucoma has significant negative associations with arteriolar VSD (hazard ratio [HR] = 0.24, 95% confidence interval [CI] 0.11–0.52, <em>P</em> &lt; 0.001), venular VSD (HR = 0.34, 95% CI 0.15–0.74, <em>P</em> = 0.007), arteriolar FD (HR = 0.24, 95% CI 0.10–0.60, <em>P</em> = 0.002), and venular FD (HR = 0.31, 95% CI 0.12–0.85, <em>P</em> = 0.022). Subgroup analysis using covariates revealed that individuals aged ≥60 years, nonsmokers, moderate alcohol consumers, and those with hypertension and myopia exhibited <em>P</em> values &lt;0.05 consistently prematching and postmatching, differing from other subgroups within this covariate.</div></div><div><h3>Conclusions</h3><div>Our study found that reduced retinal VSD and lower FD are linked to elevated glaucoma risk.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100661"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754513/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
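The hazard ratios above come from Cox proportional hazards models. A minimal sketch using the lifelines package on a hypothetical dataframe of follow-up time, event indicator, and one vascular covariate (all column names and values invented):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-participant rows, for illustration only.
df = pd.DataFrame({
    "follow_up_years":   [11.0, 9.5, 10.2, 8.7, 11.3, 7.9, 10.8, 9.1],
    "incident_glaucoma": [0, 1, 0, 1, 0, 1, 0, 0],   # 1 = glaucoma event occurred
    "arteriolar_vsd":    [0.061, 0.052, 0.063, 0.049, 0.058, 0.050, 0.060, 0.057],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="incident_glaucoma")
cph.print_summary()  # HR < 1 would indicate lower glaucoma risk with denser vasculature
```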
EyeLiner
IF 3.2
Ophthalmology Science Pub Date: 2024-11-28 DOI: 10.1016/j.xops.2024.100664
Yoga Advaith Veturi MSc, Steve McNamara OD, Scott Kinder MS, Christopher William Clark MS, Upasana Thakuria MS, Benjamin Bearce MS, Niranjan Manoharan MD, Naresh Mandava MD, Malik Y. Kahook MD, Praveer Singh PhD, Jayashree Kalpathy-Cramer PhD
{"title":"EyeLiner","authors":"Yoga Advaith Veturi MSc ,&nbsp;Steve McNamara OD ,&nbsp;Scott Kinder MS,&nbsp;Christopher William Clark MS,&nbsp;Upasana Thakuria MS,&nbsp;Benjamin Bearce MS,&nbsp;Niranjan Manoharan MD,&nbsp;Naresh Mandava MD,&nbsp;Malik Y. Kahook MD,&nbsp;Praveer Singh PhD,&nbsp;Jayashree Kalpathy-Cramer PhD","doi":"10.1016/j.xops.2024.100664","DOIUrl":"10.1016/j.xops.2024.100664","url":null,"abstract":"&lt;div&gt;&lt;h3&gt;Objective&lt;/h3&gt;&lt;div&gt;Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, “EyeLiner,” for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Design&lt;/h3&gt;&lt;div&gt;EyeLiner registers a “moving” image to a “fixed” image using a DL-based keypoint matching algorithm.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Participants&lt;/h3&gt;&lt;div&gt;We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Methods&lt;/h3&gt;&lt;div&gt;Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Main Outcome Measures&lt;/h3&gt;&lt;div&gt;We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Results&lt;/h3&gt;&lt;div&gt;EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC&lt;sub&gt;FIRE&lt;/sub&gt; = 0.76, AUC&lt;sub&gt;CORIS&lt;/sub&gt; = 0.83, AUC&lt;sub&gt;SIGF&lt;/sub&gt; = 0.74).&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Conclusions&lt;/h3&gt;&lt;div&gt;Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. 
We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Financial Disclosure(s)&lt;/h3&gt;&lt;div&gt;Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at th","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100664"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
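EyeLiner's recipe — detect keypoints, match them, fit a transform, report residual distance — can be sketched with classical stand-ins (ORB features and a RANSAC homography) in place of its learned detector and transformer-based matcher. File paths below are placeholders.

```python
import cv2
import numpy as np

fixed  = cv2.imread("fixed_cfp.png",  cv2.IMREAD_GRAYSCALE)   # placeholder paths
moving = cv2.imread("moving_cfp.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect keypoints and descriptors (EyeLiner uses a CNN detector instead).
orb = cv2.ORB_create(1000)
kp_f, des_f = orb.detectAndCompute(fixed, None)
kp_m, des_m = orb.detectAndCompute(moving, None)

# 2. Match descriptors (EyeLiner uses a transformer-based matcher instead).
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_m, des_f)
src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# 3. Fit a transform and warp the moving image onto the fixed one.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
registered = cv2.warpPerspective(moving, H, fixed.shape[::-1])

# 4. Residual mean distance between matched points after alignment.
proj = cv2.perspectiveTransform(src, H)
print("mean distance (px):", float(np.linalg.norm(proj - dst, axis=2).mean()))
```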
Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos
IF 3.2
Ophthalmology Science Pub Date: 2024-11-28 DOI: 10.1016/j.xops.2024.100659
Samantha Min Er Yew BSc, Xiaofeng Lei MSc, Yibing Chen BEng, Jocelyn Hui Lin Goh BEng, Krithi Pushpanathan MSc, Can Can Xue MD, PhD, Ya Xing Wang MD, PhD, Jost B. Jonas MD, PhD, Charumathi Sabanayagam MD, PhD, Victor Teck Chang Koh MBBS, MMed, Xinxing Xu PhD, Yong Liu PhD, Ching-Yu Cheng MD, PhD, Yih-Chung Tham PhD
{"title":"Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos","authors":"Samantha Min Er Yew BSc ,&nbsp;Xiaofeng Lei MSc ,&nbsp;Yibing Chen BEng ,&nbsp;Jocelyn Hui Lin Goh BEng ,&nbsp;Krithi Pushpanathan MSc ,&nbsp;Can Can Xue MD, PhD ,&nbsp;Ya Xing Wang MD, PhD ,&nbsp;Jost B. Jonas MD, PhD ,&nbsp;Charumathi Sabanayagam MD, PhD ,&nbsp;Victor Teck Chang Koh MBBS, MMed ,&nbsp;Xinxing Xu PhD ,&nbsp;Yong Liu PhD ,&nbsp;Ching-Yu Cheng MD, PhD ,&nbsp;Yih-Chung Tham PhD","doi":"10.1016/j.xops.2024.100659","DOIUrl":"10.1016/j.xops.2024.100659","url":null,"abstract":"<div><h3>Purpose</h3><div>Recent studies utilized ocular images and deep learning (DL) to predict refractive error and yielded notable results. However, most studies did not address biases from imbalanced datasets or conduct external validations. To address these gaps, this study aimed to integrate the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs.</div></div><div><h3>Design</h3><div>Retrospective study.</div></div><div><h3>Subjects</h3><div>We developed the DL models using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted.</div></div><div><h3>Methods</h3><div>This retrospective study developed regression-based models, including ResNet34 with DIR, and SwinV2 (Swin Transformer) with DIR, incorporating Label Distribution Smoothing and Feature Distribution Smoothing. These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power.</div></div><div><h3>Main Outcome Measures</h3><div>Mean absolute error (MAE) and coefficient of determination were used to evaluate the models’ performances. The Wilcoxon signed-rank test was performed to assess statistical significance between DIR-integrated models and their baseline versions.</div></div><div><h3>Results</h3><div>For prediction of the spherical power, ResNet34 with DIR (MAE: 0.84D) and SwinV2 with DIR (MAE: 0.77D) significantly outperformed their baseline—ResNet34 (MAE: 0.88D; <em>P</em> &lt; 0.001) and SwinV2 (MAE: 0.87D; <em>P</em> &lt; 0.001) in internal test. For prediction of the SE power, ResNet34 with DIR (MAE: 0.78D) and SwinV2 with DIR (MAE: 0.75D) consistently significantly outperformed its baseline—ResNet34 (MAE: 0.81D; <em>P</em> &lt; 0.001) and SwinV2 (MAE: 0.78D; <em>P</em> &lt; 0.05) in internal test. Similar trends were observed in external test sets for both spherical and SE power prediction.</div></div><div><h3>Conclusions</h3><div>Deep imbalanced regressed–integrated DL models showed potential in addressing data imbalances and improving the prediction of refractive error. 
These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100659"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143169380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
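Of the two DIR components named above, Label Distribution Smoothing (LDS) is the easier to illustrate: smooth the empirical label histogram with a Gaussian kernel, then weight each sample inversely to the smoothed density so that rare refractive errors contribute more to the loss. A generic sketch, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lds_weights(labels: np.ndarray, bins: int = 50, sigma: float = 2.0) -> np.ndarray:
    """Inverse smoothed-density sample weights for imbalanced regression (LDS)."""
    hist, edges = np.histogram(labels, bins=bins)
    smoothed = gaussian_filter1d(hist.astype(float), sigma=sigma)
    idx = np.clip(np.digitize(labels, edges[1:-1]), 0, bins - 1)
    weights = 1.0 / (smoothed[idx] + 1e-8)
    return weights * len(labels) / weights.sum()   # normalize to mean 1

# Simulated spherical-equivalent labels in diopters; during training, each
# sample's regression loss would be multiplied by its weight.
se_power = np.random.default_rng(0).normal(-1.5, 2.0, size=1000)
w = lds_weights(se_power)
```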