Ophthalmology Science: Latest Articles

Validation of Deep Learning–Based Automatic Retinal Layer Segmentation Algorithms for Age-Related Macular Degeneration with 2 Spectral-Domain OCT Devices
IF 3.2
Ophthalmology science Pub Date : 2024-12-04 DOI: 10.1016/j.xops.2024.100670
Souvick Mukherjee PhD , Tharindu De Silva PhD , Cameron Duic BS , Gopal Jayakar BS , Tiarnan D.L. Keenan BM BCh, PhD , Alisa T. Thavikulwat MD , Emily Chew MD , Catherine Cukras MD, PhD
{"title":"Validation of Deep Learning–Based Automatic Retinal Layer Segmentation Algorithms for Age-Related Macular Degeneration with 2 Spectral-Domain OCT Devices","authors":"Souvick Mukherjee PhD , Tharindu De Silva PhD , Cameron Duic BS , Gopal Jayakar BS , Tiarnan D.L. Keenan BM BCh, PhD , Alisa T. Thavikulwat MD , Emily Chew MD , Catherine Cukras MD, PhD","doi":"10.1016/j.xops.2024.100670","DOIUrl":"10.1016/j.xops.2024.100670","url":null,"abstract":"<div><h3>Purpose</h3><div>Segmentations of retinal layers in spectral-domain OCT (SD-OCT) images serve as a crucial tool for identifying and analyzing the progression of various retinal diseases, encompassing a broad spectrum of abnormalities associated with age-related macular degeneration (AMD). The training of deep learning algorithms necessitates well-defined ground truth labels, validated by experts, to delineate boundaries accurately. However, this resource-intensive process has constrained the widespread application of such algorithms across diverse OCT devices. This work validates deep learning image segmentation models across multiple OCT devices by testing robustness in generating clinically relevant metrics.</div></div><div><h3>Design</h3><div>Prospective comparative study.</div></div><div><h3>Participants</h3><div>Adults >50 years of age with no AMD to advanced AMD, as defined in the Age-Related Eye Disease Study, in ≥1 eye, were enrolled. Four hundred two SD-OCT scans were used in this study.</div></div><div><h3>Methods</h3><div>We evaluate 2 separate state-of-the-art segmentation algorithms through a training process using images obtained from 1 OCT device (Heidelberg-Spectralis) and subsequent testing using images acquired from 2 OCT devices (Heidelberg-Spectralis and Zeiss-Cirrus). This assessment is performed on a dataset that encompasses a range of retinal pathologies, spanning from disease-free conditions to severe forms of AMD, with a focus on evaluating the device independence of the algorithms.</div></div><div><h3>Main Outcome Measures</h3><div>Performance metrics (including mean squared error, mean absolute error [MAE], and Dice coefficients) for the segmentations of the internal limiting membrane (ILM), retinal pigment epithelium (RPE), and RPE to Bruch’s membrane region, along with en face thickness maps, volumetric estimations (in mm<sup>3</sup>). Violin plots and Bland–Altman plots comparing predictions against ground truth are also presented.</div></div><div><h3>Results</h3><div>The UNet and DeepLabv3, trained on Spectralis B-scans, demonstrate clinically useful outcomes when applied to Cirrus test B-scans. Review of the Cirrus test data by 2 independent annotators revealed that the aggregated MAE in pixels for ILM was 1.82 ± 0.24 (equivalent to 7.0 ± 0.9 μm) and for RPE was 2.46 ± 0.66 (9.5 ± 2.6 μm). Additionally, the Dice similarity coefficient for the RPE drusen complex region, comparing predictions to ground truth, reached 0.87 ± 0.01.</div></div><div><h3>Conclusions</h3><div>In the pursuit of task-specific goals such as retinal layer segmentation, a segmentation network has the capacity to acquire domain-independent features from a large training dataset. 
This enables the utilization of the network to execute tasks in domains where ground truth is hard to generate.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end ","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100670"},"PeriodicalIF":3.2,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
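As a worked illustration of the metrics above, the sketch below computes a Dice coefficient between two binary layer masks and converts a boundary MAE from pixels to micrometers. The ~3.85 μm/px axial scale is back-calculated from the reported 1.82 px ≈ 7.0 μm and, like the function names, is an assumption for illustration rather than a value taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks (1 = layer, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return float(2.0 * intersection / denom) if denom else 1.0

# Hypothetical axial scale inferred from the abstract (1.82 px ≈ 7.0 μm).
UM_PER_PIXEL = 7.0 / 1.82

def boundary_mae_um(pred_rows: np.ndarray, truth_rows: np.ndarray) -> float:
    """MAE between predicted and ground-truth boundary row positions, in μm."""
    return float(np.mean(np.abs(pred_rows - truth_rows)) * UM_PER_PIXEL)
```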
Could Infectious Agents Play a Role in the Onset of Age-related Macular Degeneration? A Scoping Review
IF 3.2
Ophthalmology science Pub Date : 2024-11-30 DOI: 10.1016/j.xops.2024.100668
Petra P. Larsen MD, PhD , Virginie Dinet PhD , Cécile Delcourt PhD , Catherine Helmer MD, PhD , Morgane Linard MD, PhD
{"title":"Could Infectious Agents Play a Role in the Onset of Age-related Macular Degeneration? A Scoping Review","authors":"Petra P. Larsen MD, PhD ,&nbsp;Virginie Dinet PhD ,&nbsp;Cécile Delcourt PhD ,&nbsp;Catherine Helmer MD, PhD ,&nbsp;Morgane Linard MD, PhD","doi":"10.1016/j.xops.2024.100668","DOIUrl":"10.1016/j.xops.2024.100668","url":null,"abstract":"<div><h3>Topic</h3><div>This scoping review aims to summarize the current state of knowledge on the potential involvement of infections in age-related macular degeneration (AMD).</div></div><div><h3>Clinical relevance</h3><div>Age-related macular degeneration is a multifactorial disease and the leading cause of vision loss among older adults in developed countries. Clarifying whether certain infections participate in its onset or progression seems essential, given the potential implications for treatment and prevention.</div></div><div><h3>Methods</h3><div>Using the PubMed database, we searched for articles in English, published until June 1, 2023, whose title and/or abstract contained terms related to AMD and infections. All types of study design, infectious agents, AMD diagnostic methods, and AMD stages were considered. Articles dealing with the oral and gut microbiota were not included but we provide a brief summary of high-quality literature reviews recently published on the subject.</div></div><div><h3>Results</h3><div>Two investigators independently screened the 868 articles obtained by our algorithm and the reference lists of selected studies. In total, 40 articles were included, among which 30 on human data, 9 animal studies, 6 in vitro experiments, and 1 hypothesis paper (sometimes with several data types in the same article). Of these, 27 studies were published after 2010, highlighting a growing interest in recent years. A wide range of infectious agents has been investigated, including various microbiota (nasal, pharyngeal), 8 bacteria, 6 viral species, and 1 yeast. Among them, most have been investigated anecdotally. Only <em>Chlamydia pneumoniae</em>, <em>Cytomegalovirus</em>, and hepatitis B virus received more attention with 17, 6, and 4 studies, respectively. Numerous potential pathophysiological mechanisms have been discussed, including (1) an indirect role of infectious agents (i.e. 
a role of infections located distant from the eye, mainly through their interactions with the immune system) and (2) a direct role of some infectious agents implying potential infection of various cells types within AMD-related tissues.</div></div><div><h3>Conclusions</h3><div>Overall, this review highlights the diversity of possible interactions between infectious agents and AMD and suggests avenues of research to enrich the data currently available, which provide an insufficient level of evidence to conclude whether or not infectious agents are involved in this pathology.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100668"},"PeriodicalIF":3.2,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143169440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Glaucoma Detection and Feature Identification via GPT-4V Fundus Image Analysis
IF 3.2
Ophthalmology science Pub Date : 2024-11-29 DOI: 10.1016/j.xops.2024.100667
Jalil Jalili PhD , Anuwat Jiravarnsirikul MD , Christopher Bowd PhD , Benton Chuter MD , Akram Belghith PhD , Michael H. Goldbaum MD , Sally L. Baxter MD , Robert N. Weinreb MD , Linda M. Zangwill PhD , Mark Christopher PhD
{"title":"Glaucoma Detection and Feature Identification via GPT-4V Fundus Image Analysis","authors":"Jalil Jalili PhD ,&nbsp;Anuwat Jiravarnsirikul MD ,&nbsp;Christopher Bowd PhD ,&nbsp;Benton Chuter MD ,&nbsp;Akram Belghith PhD ,&nbsp;Michael H. Goldbaum MD ,&nbsp;Sally L. Baxter MD ,&nbsp;Robert N. Weinreb MD ,&nbsp;Linda M. Zangwill PhD ,&nbsp;Mark Christopher PhD","doi":"10.1016/j.xops.2024.100667","DOIUrl":"10.1016/j.xops.2024.100667","url":null,"abstract":"<div><h3>Purpose</h3><div>The aim is to assess GPT-4V's (OpenAI) diagnostic accuracy and its capability to identify glaucoma-related features compared to expert evaluations.</div></div><div><h3>Design</h3><div>Evaluation of multimodal large language models for reviewing fundus images in glaucoma.</div></div><div><h3>Subjects</h3><div>A total of 300 fundus images from 3 public datasets (ACRIMA, ORIGA, and RIM-One v3) that included 139 glaucomatous and 161 nonglaucomatous cases were analyzed.</div></div><div><h3>Methods</h3><div>Preprocessing ensured each image was centered on the optic disc. GPT-4's vision-preview model (GPT-4V) assessed each image for various glaucoma-related criteria: image quality, image gradability, cup-to-disc ratio, peripapillary atrophy, disc hemorrhages, rim thinning (by quadrant and clock hour), glaucoma status, and estimated probability of glaucoma. Each image was analyzed twice by GPT-4V to evaluate consistency in its predictions. Two expert graders independently evaluated the same images using identical criteria. Comparisons between GPT-4V's assessments, expert evaluations, and dataset labels were made to determine accuracy, sensitivity, specificity, and Cohen kappa.</div></div><div><h3>Main Outcome Measures</h3><div>The main parameters measured were the accuracy, sensitivity, specificity, and Cohen kappa of GPT-4V in detecting glaucoma compared with expert evaluations.</div></div><div><h3>Results</h3><div>GPT-4V successfully provided glaucoma assessments for all 300 fundus images across the datasets, although approximately 35% required multiple prompt submissions. GPT-4V's overall accuracy in glaucoma detection was slightly lower (0.68, 0.70, and 0.81, respectively) than that of expert graders (0.78, 0.80, and 0.88, for expert grader 1 and 0.72, 0.78, and 0.87, for expert grader 2, respectively), across the ACRIMA, ORIGA, and RIM-ONE datasets. In Glaucoma detection, GPT-4V showed variable agreement by dataset and expert graders, with Cohen kappa values ranging from 0.08 to 0.72. 
In terms of feature detection, GPT-4V demonstrated high consistency (repeatability) in image gradability, with an agreement accuracy of ≥89% and substantial agreement in rim thinning and cup-to-disc ratio assessments, although kappas were generally lower than expert-to-expert agreement.</div></div><div><h3>Conclusions</h3><div>GPT-4V shows promise as a tool in glaucoma screening and detection through fundus image analysis, demonstrating generally high agreement with expert evaluations of key diagnostic features, although agreement did vary substantially across datasets.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100667"},"PeriodicalIF":3.2,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773068/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
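A minimal sketch of how the agreement statistics reported above are conventionally computed, assuming scikit-learn is available; the label vectors are invented for illustration:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-image calls: 1 = glaucoma, 0 = no glaucoma.
gpt4v_calls  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
expert_calls = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print("accuracy:", accuracy_score(expert_calls, gpt4v_calls))        # fraction of images agreeing
print("Cohen kappa:", cohen_kappa_score(expert_calls, gpt4v_calls))  # chance-corrected agreement
```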
Multimodal Deep Learning for Differentiating Bacterial and Fungal Keratitis Using Prospective Representative Data
IF 3.2
Ophthalmology science Pub Date : 2024-11-29 DOI: 10.1016/j.xops.2024.100665
N.V. Prajna MD , Jad Assaf MD , Nisha R. Acharya MD, MS , Jennifer Rose-Nussbaumer MD , Thomas M. Lietman MD , J. Peter Campbell MD, MPH , Jeremy D. Keenan MD, MPH , Xubo Song PhD , Travis K. Redd MD, MPH
{"title":"Multimodal Deep Learning for Differentiating Bacterial and Fungal Keratitis Using Prospective Representative Data","authors":"N.V. Prajna MD ,&nbsp;Jad Assaf MD ,&nbsp;Nisha R. Acharya MD, MS ,&nbsp;Jennifer Rose-Nussbaumer MD ,&nbsp;Thomas M. Lietman MD ,&nbsp;J. Peter Campbell MD, MPH ,&nbsp;Jeremy D. Keenan MD, MPH ,&nbsp;Xubo Song PhD ,&nbsp;Travis K. Redd MD, MPH","doi":"10.1016/j.xops.2024.100665","DOIUrl":"10.1016/j.xops.2024.100665","url":null,"abstract":"<div><h3>Objective</h3><div>This study develops and evaluates multimodal machine learning models for differentiating bacterial and fungal keratitis using a prospective representative dataset from South India.</div></div><div><h3>Design</h3><div>Machine learning classifier training and validation study.</div></div><div><h3>Participants</h3><div>Five hundred ninety-nine subjects diagnosed with acute infectious keratitis at Aravind Eye Hospital in Madurai, India.</div></div><div><h3>Methods</h3><div>We developed and compared 3 prediction models to distinguish bacterial and fungal keratitis using a prospective, consecutively-collected, representative dataset gathered over a full calendar year (the MADURAI dataset). These models included a clinical data model, a computer vision model using the EfficientNet architecture, and a multimodal model combining both imaging and clinical data. We partitioned the MADURAI dataset into 70% train/validation and 30% test sets. Model training was performed with fivefold cross-validation. We also compared the performance of the MADURAI-trained computer vision model against a model with identical architecture but trained on a preexisting dataset collated from multiple prior bacterial and fungal keratitis randomized clinical trials (RCTs) (the RCT-trained computer vision model).</div></div><div><h3>Main Outcome Measures</h3><div>The primary evaluation metric was the area under the precision-recall curve (AUPRC). Secondary metrics included area under the receiver operating characteristic curve (AUROC), accuracy, and F1 score.</div></div><div><h3>Results</h3><div>The MADURAI-trained computer vision model outperformed the clinical data model and the RCT-trained computer vision model on the hold-out test set, with an AUPRC 0.94 (95% confidence interval: 0.92–0.96), AUROC 0.81 (0.76–0.85), accuracy 77%, and F1 score 0.85. The multimodal model did not substantially improve performance compared with the computer vision model.</div></div><div><h3>Conclusions</h3><div>The best-performing machine learning classifier for infectious keratitis was a computer vision model trained using the MADURAI dataset. 
These findings suggest that image-based deep learning could significantly enhance diagnostic capabilities for infectious keratitis and emphasize the importance of using prospective, consecutively-collected, representative data for machine learning model training and evaluation.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100665"},"PeriodicalIF":3.2,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758206/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
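For readers less familiar with AUPRC as a primary metric, a minimal sketch of the three headline metrics using scikit-learn; the labels and model scores below are invented for illustration:

```python
from sklearn.metrics import average_precision_score, f1_score, roc_auc_score

# Hypothetical hold-out labels (1 = fungal, 0 = bacterial) and model scores.
y_true  = [1, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.92, 0.30, 0.71, 0.88, 0.45, 0.64, 0.52, 0.97]

print("AUPRC:", average_precision_score(y_true, y_score))  # area under precision-recall curve
print("AUROC:", roc_auc_score(y_true, y_score))
print("F1:", f1_score(y_true, [s >= 0.5 for s in y_score]))  # at a 0.5 decision threshold
```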
Associations of Retinal Microvascular Density and Fractal Dimension with Glaucoma: A Prospective Study from UK Biobank
IF 3.2
Ophthalmology science Pub Date : 2024-11-28 DOI: 10.1016/j.xops.2024.100661
Qi Chen MD , Suyu Miao MD , Yuzhe Jiang MD , Danli Shi MD, PhD , Weiyun You MD , Lin Liu MD, PhD , Mayinuer Yusufu MTI , Yufan Chen MD , Ruobing Wang MD, PhD
{"title":"Associations of Retinal Microvascular Density and Fractal Dimension with Glaucoma: A Prospective Study from UK Biobank","authors":"Qi Chen MD ,&nbsp;Suyu Miao MD ,&nbsp;Yuzhe Jiang MD ,&nbsp;Danli Shi MD, PhD ,&nbsp;Weiyun You MD ,&nbsp;Lin Liu MD, PhD ,&nbsp;Mayinuer Yusufu MTI ,&nbsp;Yufan Chen MD ,&nbsp;Ruobing Wang MD, PhD","doi":"10.1016/j.xops.2024.100661","DOIUrl":"10.1016/j.xops.2024.100661","url":null,"abstract":"<div><h3>Objective</h3><div>To explore the association between retinal microvascular parameters and glaucoma.</div></div><div><h3>Design</h3><div>Prospective study.</div></div><div><h3>Subjects</h3><div>The UK Biobank subjects with fundus images and without a history of glaucoma.</div></div><div><h3>Methods</h3><div>We employed the Retina-based Microvascular Health Assessment System to utilize the noninvasive nature of fundus photography and quantify retinal microvascular parameters including retinal vascular skeleton density (VSD) and fractal dimension (FD). We also utilized propensity score matching (PSM) to pair individuals with glaucoma and healthy controls. Propensity score matching was implemented via a logistic regression model with a caliper of 0.1 and a matching ratio of 1:4 no replacements. We conducted univariable Cox regression analyses to study the association between retinal microvascular parameters and incident glaucoma, in both continuous and quartile forms.</div></div><div><h3>Main Outcome Measure</h3><div>Vascular skeleton density, FD, and glaucoma.</div></div><div><h3>Results</h3><div>In a study of 41 632 participants without prior glaucoma, 482 cases of glaucoma were recorded during a median follow-up of 11.0 years. In the Cox proportional hazards regression model post-PSM, we found that incident glaucoma has significant negative associations with arteriolar VSD (hazard ratio [HR] = 0.24, 95% confidence interval [CI] 0.11–0.52, <em>P</em> &lt; 0.001), venular VSD (HR = 0.34, 95% CI 0.15–0.74, <em>P</em> = 0.007), arteriolar FD (HR = 0.24, 95% CI 0.10–0.60, <em>P</em> = 0.002), and venular FD (HR = 0.31, 95% CI 0.12–0.85, <em>P</em> = 0.022). Subgroup analysis using covariates revealed that individuals aged ≥60 years, nonsmokers, moderate alcohol consumers, and those with hypertension and myopia exhibited <em>P</em> values &lt;0.05 consistently prematching and postmatching, differing from other subgroups within this covariate.</div></div><div><h3>Conclusions</h3><div>Our study found that reduced retinal VSD and lower FD are linked to elevated glaucoma risk.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100661"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754513/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EyeLiner
IF 3.2
Ophthalmology science Pub Date : 2024-11-28 DOI: 10.1016/j.xops.2024.100664
Yoga Advaith Veturi MSc , Steve McNamara OD , Scott Kinder MS, Christopher William Clark MS, Upasana Thakuria MS, Benjamin Bearce MS, Niranjan Manoharan MD, Naresh Mandava MD, Malik Y. Kahook MD, Praveer Singh PhD, Jayashree Kalpathy-Cramer PhD
{"title":"EyeLiner","authors":"Yoga Advaith Veturi MSc ,&nbsp;Steve McNamara OD ,&nbsp;Scott Kinder MS,&nbsp;Christopher William Clark MS,&nbsp;Upasana Thakuria MS,&nbsp;Benjamin Bearce MS,&nbsp;Niranjan Manoharan MD,&nbsp;Naresh Mandava MD,&nbsp;Malik Y. Kahook MD,&nbsp;Praveer Singh PhD,&nbsp;Jayashree Kalpathy-Cramer PhD","doi":"10.1016/j.xops.2024.100664","DOIUrl":"10.1016/j.xops.2024.100664","url":null,"abstract":"&lt;div&gt;&lt;h3&gt;Objective&lt;/h3&gt;&lt;div&gt;Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, “EyeLiner,” for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Design&lt;/h3&gt;&lt;div&gt;EyeLiner registers a “moving” image to a “fixed” image using a DL-based keypoint matching algorithm.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Participants&lt;/h3&gt;&lt;div&gt;We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Methods&lt;/h3&gt;&lt;div&gt;Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Main Outcome Measures&lt;/h3&gt;&lt;div&gt;We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Results&lt;/h3&gt;&lt;div&gt;EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC&lt;sub&gt;FIRE&lt;/sub&gt; = 0.76, AUC&lt;sub&gt;CORIS&lt;/sub&gt; = 0.83, AUC&lt;sub&gt;SIGF&lt;/sub&gt; = 0.74).&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Conclusions&lt;/h3&gt;&lt;div&gt;Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. 
We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Financial Disclosure(s)&lt;/h3&gt;&lt;div&gt;Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at th","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100664"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
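The final step of a pipeline like this, fitting a transform to matched keypoints and scoring alignment by mean distance, can be sketched as below; the keypoints are invented, and a plain least-squares affine model stands in for EyeLiner's learned transformation:

```python
import numpy as np

# Hypothetical matched keypoints (x, y) in the moving and fixed images.
moving = np.array([[120.0, 85.0], [310.0, 240.0], [200.0, 400.0], [450.0, 150.0]])
fixed  = np.array([[118.0, 90.0], [305.0, 248.0], [196.0, 405.0], [447.0, 158.0]])

# Least-squares affine transform A such that fixed ≈ [moving | 1] @ A.
X = np.hstack([moving, np.ones((len(moving), 1))])
A, *_ = np.linalg.lstsq(X, fixed, rcond=None)
registered = X @ A

# Mean distance (MD) between registered and fixed keypoints, in pixels.
md = np.mean(np.linalg.norm(registered - fixed, axis=1))
print(f"mean distance after registration: {md:.2f} px")
```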
Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos
IF 3.2
Ophthalmology science Pub Date : 2024-11-28 DOI: 10.1016/j.xops.2024.100659
Samantha Min Er Yew BSc , Xiaofeng Lei MSc , Yibing Chen BEng , Jocelyn Hui Lin Goh BEng , Krithi Pushpanathan MSc , Can Can Xue MD, PhD , Ya Xing Wang MD, PhD , Jost B. Jonas MD, PhD , Charumathi Sabanayagam MD, PhD , Victor Teck Chang Koh MBBS, MMed , Xinxing Xu PhD , Yong Liu PhD , Ching-Yu Cheng MD, PhD , Yih-Chung Tham PhD
{"title":"Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos","authors":"Samantha Min Er Yew BSc ,&nbsp;Xiaofeng Lei MSc ,&nbsp;Yibing Chen BEng ,&nbsp;Jocelyn Hui Lin Goh BEng ,&nbsp;Krithi Pushpanathan MSc ,&nbsp;Can Can Xue MD, PhD ,&nbsp;Ya Xing Wang MD, PhD ,&nbsp;Jost B. Jonas MD, PhD ,&nbsp;Charumathi Sabanayagam MD, PhD ,&nbsp;Victor Teck Chang Koh MBBS, MMed ,&nbsp;Xinxing Xu PhD ,&nbsp;Yong Liu PhD ,&nbsp;Ching-Yu Cheng MD, PhD ,&nbsp;Yih-Chung Tham PhD","doi":"10.1016/j.xops.2024.100659","DOIUrl":"10.1016/j.xops.2024.100659","url":null,"abstract":"<div><h3>Purpose</h3><div>Recent studies utilized ocular images and deep learning (DL) to predict refractive error and yielded notable results. However, most studies did not address biases from imbalanced datasets or conduct external validations. To address these gaps, this study aimed to integrate the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs.</div></div><div><h3>Design</h3><div>Retrospective study.</div></div><div><h3>Subjects</h3><div>We developed the DL models using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted.</div></div><div><h3>Methods</h3><div>This retrospective study developed regression-based models, including ResNet34 with DIR, and SwinV2 (Swin Transformer) with DIR, incorporating Label Distribution Smoothing and Feature Distribution Smoothing. These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power.</div></div><div><h3>Main Outcome Measures</h3><div>Mean absolute error (MAE) and coefficient of determination were used to evaluate the models’ performances. The Wilcoxon signed-rank test was performed to assess statistical significance between DIR-integrated models and their baseline versions.</div></div><div><h3>Results</h3><div>For prediction of the spherical power, ResNet34 with DIR (MAE: 0.84D) and SwinV2 with DIR (MAE: 0.77D) significantly outperformed their baseline—ResNet34 (MAE: 0.88D; <em>P</em> &lt; 0.001) and SwinV2 (MAE: 0.87D; <em>P</em> &lt; 0.001) in internal test. For prediction of the SE power, ResNet34 with DIR (MAE: 0.78D) and SwinV2 with DIR (MAE: 0.75D) consistently significantly outperformed its baseline—ResNet34 (MAE: 0.81D; <em>P</em> &lt; 0.001) and SwinV2 (MAE: 0.78D; <em>P</em> &lt; 0.05) in internal test. Similar trends were observed in external test sets for both spherical and SE power prediction.</div></div><div><h3>Conclusions</h3><div>Deep imbalanced regressed–integrated DL models showed potential in addressing data imbalances and improving the prediction of refractive error. 
These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100659"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143169380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
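Label Distribution Smoothing, one of the DIR components named above, reweights training samples by a kernel-smoothed label density so that rare label values contribute more to the loss. A minimal sketch under assumed bin sizes and kernel width (NumPy/SciPy), with invented refractive-error labels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical refractive errors (diopters) with the usual imbalance:
# many near-emmetropic eyes, few high myopes.
labels = np.random.default_rng(0).normal(loc=-1.0, scale=2.0, size=5000)

bins = np.arange(-15.0, 10.5, 0.5)                 # 0.5 D label bins (assumed)
hist, _ = np.histogram(labels, bins=bins)
smoothed = gaussian_filter1d(hist.astype(float), sigma=2)  # LDS: smooth the empirical density

# Inverse-density weights: rare refractive errors get larger loss weights.
weights = 1.0 / np.clip(smoothed, 1e-3, None)
weights /= weights.mean()
bin_idx = np.clip(np.digitize(labels, bins) - 1, 0, len(hist) - 1)
sample_weights = weights[bin_idx]                  # per-sample weight for a weighted loss
```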
Implementing a Common Data Model in Ophthalmology: Mapping Structured Electronic Health Record Ophthalmic Examination Data to Standard Vocabularies
IF 3.2
Ophthalmology science Pub Date : 2024-11-28 DOI: 10.1016/j.xops.2024.100666
Justin C. Quon MD , Christopher P. Long MD , William Halfpenny MBBS, MEng , Amy Chuang MS , Cindy X. Cai MD, MS , Sally L. Baxter MD, MSc , Vamsi Daketi MS , Amanda Schmitz BS , Neil Bahroos MS , Benjamin Y. Xu MD, PhD , Brian C. Toy MD
{"title":"Implementing a Common Data Model in Ophthalmology: Mapping Structured Electronic Health Record Ophthalmic Examination Data to Standard Vocabularies","authors":"Justin C. Quon MD ,&nbsp;Christopher P. Long MD ,&nbsp;William Halfpenny MBBS, MEng ,&nbsp;Amy Chuang MS ,&nbsp;Cindy X. Cai MD, MS ,&nbsp;Sally L. Baxter MD, MSc ,&nbsp;Vamsi Daketi MS ,&nbsp;Amanda Schmitz BS ,&nbsp;Neil Bahroos MS ,&nbsp;Benjamin Y. Xu MD, PhD ,&nbsp;Brian C. Toy MD","doi":"10.1016/j.xops.2024.100666","DOIUrl":"10.1016/j.xops.2024.100666","url":null,"abstract":"<div><h3>Objective</h3><div>To identify and characterize concept coverage gaps of ophthalmology examination data elements within the Cerner Millennium electronic health record (EHR) implementations by the Observational Health Data Sciences and Informatics Observational Medical Outcomes Partnership (OMOP) common data model (CDM).</div></div><div><h3>Design</h3><div>Analysis of data elements in EHRs.</div></div><div><h3>Subjects</h3><div>Not applicable.</div></div><div><h3>Methods</h3><div>Source eye examination data elements from the default Cerner Model Experience EHR and a local implementation of the Cerner Millennium EHR were extracted, classified into one of 8 subject categories, and mapped to the semantically closest standard concept in the OMOP CDM. Mappings were categorized as exact, if the data element and OMOP concept represented equivalent information, wider, if the OMOP concept was missing conceptual granularity, narrower, if the OMOP concept introduced excess information, and unmatched, if no standard concept adequately represented the data element. Descriptive statistics and qualitative analysis were used to describe the concept coverage for each subject category.</div></div><div><h3>Main Outcome Measures</h3><div>Concept coverage gaps in 8 ophthalmology subject categories of data elements by the OMOP CDM.</div></div><div><h3>Results</h3><div>There were 409 and 947 ophthalmology data elements in the default and local Cerner modules, respectively. Of the 409 mappings in the default Cerner module, 25% (n = 102) were exact, 53% (n = 217) were wider, 3% (n = 11) were narrower, and 19% (n = 79) were unmatched. In the local Cerner module, 18% (n = 173) of mappings were exact, 54% (n = 514) were wider, 1% (n = 10) were narrower, and 26% (n = 250) were <em>unmatched</em>. The largest coverage gaps were seen in the local Cerner module under the visual acuity, sensorimotor testing, and refraction categories, with 95%, 95%, and 81% of data elements in each respective category having mappings that were not exact. Concept coverage gaps spanned all 8 categories in both EHR implementations.</div></div><div><h3>Conclusions</h3><div>Considerable coverage gaps by the OMOP CDM exist in all areas of the ophthalmology examination, which should be addressed to improve the OMOP CDM’s effectiveness in ophthalmic research. 
We identify specific subject categories that may benefit from increased granularity in the OMOP CDM and provide suggestions for facilitating consistency of standard concepts, with the goal of improving data standards in ophthalmology.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100666"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11783105/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143082438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
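The coverage audit described above reduces to labeling each source data element with a mapping category and tallying the results; a minimal sketch with hypothetical data elements and labels:

```python
from collections import Counter

# Hypothetical audit: each source EHR data element is labeled with how its
# closest OMOP standard concept relates to it.
mappings = {
    "Visual acuity OD (Snellen)":      "wider",
    "IOP OD (mmHg, Goldmann)":         "exact",
    "Sensorimotor: prism cover test":  "unmatched",
    "Refraction: sphere OS":           "wider",
}

counts = Counter(mappings.values())
total = len(mappings)
for category in ("exact", "wider", "narrower", "unmatched"):
    n = counts.get(category, 0)
    print(f"{category}: {n} ({100 * n / total:.0f}%)")
```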
The Topographic Relationships and Geographic Distribution of Prevascular Vitreous Fissures and Cisterns Assessed by Ultrawidefield En Face Vitreous Images
IF 3.2
Ophthalmology science Pub Date : 2024-11-28 DOI: 10.1016/j.xops.2024.100660
Fei Deng MD, Mengying Tao MD, Yanjie Zhu MD, Xiaoyu Xu MD, Yue Wu MD, Lisha Li, Ying Lin MD, PhD, Yan Luo MD, PhD
{"title":"The Topographic Relationships and Geographic Distribution of Prevascular Vitreous Fissures and Cisterns Assessed by Ultrawidefield En Face Vitreous Images","authors":"Fei Deng MD,&nbsp;Mengying Tao MD,&nbsp;Yanjie Zhu MD,&nbsp;Xiaoyu Xu MD,&nbsp;Yue Wu MD,&nbsp;Lisha Li,&nbsp;Ying Lin MD, PhD,&nbsp;Yan Luo MD, PhD","doi":"10.1016/j.xops.2024.100660","DOIUrl":"10.1016/j.xops.2024.100660","url":null,"abstract":"<div><h3>Purpose</h3><div>To determine the topographic relationships and geographic distribution of prevascular vitreous fissures (PVFs) and cisterns across the entire posterior vitreous membrane in healthy subjects, using ultrawidefield en face and cross-sectional swept-source OCT (SS-OCT) images.</div></div><div><h3>Design</h3><div>Observational cross-sectional study.</div></div><div><h3>Participants</h3><div>Ninety-six eyes of 96 healthy participants (age range, 20–49 years) without posterior vitreous detachment.</div></div><div><h3>Methods</h3><div>For each eye, a 29 × 24-mm SS-OCT volume scan was obtained, along with standardized horizontal and vertical scans through the fovea.</div></div><div><h3>Main Outcome Measures</h3><div>Ultrawidefield en face and cross-sectional images were analyzed to assess the topographic relationships and geographic distribution of PVFs and cisterns in the posterior vitreous.</div></div><div><h3>Results</h3><div>En face imaging readily distinguished various preretinal liquefaction spaces throughout the posterior vitreous, extending to near the equator. Aside from the posterior precortical vitreous pocket (PPVP) and the area of Martegiani, all preretinal liquefied fissures and cisterns were distributed along superficial retinal vessels, suggesting they originated from prevascular vitreous liquefaction. In 96 eyes of healthy young and middle-aged adults, PVFs were identified in all participants, presenting a continuous course. Cisterns were detected in 79 eyes (82.3%) and were distributed as follows: superotemporal (91.1%), infratemporal (63.3%), supranasal (41.8%), and inferonasal (22.8%), respectively. The superotemporal cistern was most frequently observed (<em>P</em> &lt; 0.001), and cisterns were more likely to involve multiple quadrants with age (<em>P</em> = 0.005). 
Additionally, all preretinal liquefaction spaces, including the PPVP, PVFs, and cisterns, were consistently located overlying the vitreoretinal tightly adhered regions.</div></div><div><h3>Conclusions</h3><div>Ultrawidefield en face vitreous imaging in healthy young and middle-aged adults revealed that (1) PVFs distributed along superficial retinal vessels with continuous course; (2) cisterns may develop from PVFs and are more common in the superotemporal quadrant; (3) cisterns appear early in life and become more widespread with age; (4) preretinal vitreous liquefaction follows a stereotypic pattern, aligning along regions of firm vitreoretinal adhesion.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100660"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143168917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Quantification of Retinopathy of Prematurity Stage via Ultrawidefield OCT
IF 3.2
Ophthalmology science Pub Date : 2024-11-28 DOI: 10.1016/j.xops.2024.100663
Spencer S. Burt BA , Aaron S. Coyner PhD , Elizabeth V. Roti BS , Yakub Bayhaqi PhD , John Jackson MD , Mani K. Woodward MS , Shuibin Ni PhD , Susan R. Ostmo MS , Guangru Liang BS , Yali Jia PhD , David Huang MD , Michael F. Chiang MD , Benjamin K. Young MD , Yifan Jian PhD , John Peter Campbell MD
{"title":"Automated Quantification of Retinopathy of Prematurity Stage via Ultrawidefield OCT","authors":"Spencer S. Burt BA ,&nbsp;Aaron S. Coyner PhD ,&nbsp;Elizabeth V. Roti BS ,&nbsp;Yakub Bayhaqi PhD ,&nbsp;John Jackson MD ,&nbsp;Mani K. Woodward MS ,&nbsp;Shuibin Ni PhD ,&nbsp;Susan R. Ostmo MS ,&nbsp;Guangru Liang BS ,&nbsp;Yali Jia PhD ,&nbsp;David Huang MD ,&nbsp;Michael F. Chiang MD ,&nbsp;Benjamin K. Young MD ,&nbsp;Yifan Jian PhD ,&nbsp;John Peter Campbell MD","doi":"10.1016/j.xops.2024.100663","DOIUrl":"10.1016/j.xops.2024.100663","url":null,"abstract":"<div><h3>Purpose</h3><div>Retinopathy of prematurity (ROP) stage is defined by the visual appearance of the vascular-avascular border, which reflects a spectrum of pathologic neurovascular tissue (NVT). Previous work demonstrated that the thickness of the ridge lesion, measured using OCT, corresponds to higher clinical diagnosis of stage. This study evaluates whether the volume of anomalous NVT (ANVTV), defined as abnormal tissue protruding from the regular contour of the retina, can be measured automatically using deep learning to develop quantitative OCT-based biomarkers in ROP.</div></div><div><h3>Design</h3><div>Single-center retrospective case series.</div></div><div><h3>Participants</h3><div>Thirty-three infants with ROP in the Oregon Health &amp; Science University neonatal intensive care unit.</div></div><div><h3>Methods</h3><div>OCT B-scans were collected using an investigational ultrawidefield OCT. The ANVTV was manually segmented. A set of 3347 B-scans and corresponding manual segmentations from 12 volumes from 6 patients were used to train an automated segmentation tool using a U-Net. An additional held-out test data set of 60 B-scans from 6 infants was used to evaluate model performance. The Dice–Sorensen coefficient (DSC) comparing manual and automated segmentation of ANVTV was calculated. Scans from 21 additional infants were used for clinical evaluation of ANVTV using the visit in which they had developed their peak stage of ROP. Each infant had every B-scan in a volume automatically segmented for ANVTV (total number of segmented voxels within the 60° temporal to the optic disc). The ANVTV was compared between infants with stage 1 to 3 ROP using a Kruskal–Wallis test and tracked over time in all infants with stage 3 ROP.</div></div><div><h3>Main Outcome Measurements</h3><div>Cross sectional and longitudinal association between ANVTV and stages 1 to 3 ROP.</div></div><div><h3>Results</h3><div>Comparing automated and manual segmentation of ANVTV achieved a DSC of 0.61 ± 0.13. Using the U-Net, ANVTV was associated with higher disease stage both cross sectionally and longitudinally. Median ANVTV significantly increased as ROP stage worsened from 1 (0, [interquartile range: 0–0] kilovoxels) to 2 (170.1 [interquartile range: 104.2–183.6] kilovoxels) to 3 (421.4 [interquartile range: 312.3–1110.8] kilovoxels; <em>P</em> &lt; 0.001).</div></div><div><h3>Conclusions</h3><div>Automated OCT-based measurement of ANVTV was associated with clinical disease stage in ROP, both cross sectionally and longitudinally. 
Ultrawidefield-OCT may facilitate more objective screening, diagnosis, and monitoring in the future.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100663"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11760822/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
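A minimal sketch of the group comparison and the voxel-counting definition of ANVTV, assuming SciPy; the per-infant values are invented around the reported medians and interquartile ranges:

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical per-infant ANVTV (kilovoxels) grouped by peak ROP stage.
stage1 = [0.0, 0.0, 0.0, 12.4]
stage2 = [104.2, 170.1, 183.6, 150.9]
stage3 = [312.3, 421.4, 1110.8, 540.2]

stat, p = kruskal(stage1, stage2, stage3)  # nonparametric comparison across stages
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")

# ANVTV from a predicted binary mask: count segmented voxels in a volume.
pred_mask = np.zeros((200, 512, 512), dtype=bool)  # placeholder U-Net output
anvtv_kilovoxels = pred_mask.sum() / 1000.0
```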