Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos

Samantha Min Er Yew BSc, Xiaofeng Lei MSc, Yibing Chen BEng, Jocelyn Hui Lin Goh BEng, Krithi Pushpanathan MSc, Can Can Xue MD, PhD, Ya Xing Wang MD, PhD, Jost B. Jonas MD, PhD, Charumathi Sabanayagam MD, PhD, Victor Teck Chang Koh MBBS, MMed, Xinxing Xu PhD, Yong Liu PhD, Ching-Yu Cheng MD, PhD, Yih-Chung Tham PhD
{"title":"Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos","authors":"Samantha Min Er Yew BSc ,&nbsp;Xiaofeng Lei MSc ,&nbsp;Yibing Chen BEng ,&nbsp;Jocelyn Hui Lin Goh BEng ,&nbsp;Krithi Pushpanathan MSc ,&nbsp;Can Can Xue MD, PhD ,&nbsp;Ya Xing Wang MD, PhD ,&nbsp;Jost B. Jonas MD, PhD ,&nbsp;Charumathi Sabanayagam MD, PhD ,&nbsp;Victor Teck Chang Koh MBBS, MMed ,&nbsp;Xinxing Xu PhD ,&nbsp;Yong Liu PhD ,&nbsp;Ching-Yu Cheng MD, PhD ,&nbsp;Yih-Chung Tham PhD","doi":"10.1016/j.xops.2024.100659","DOIUrl":null,"url":null,"abstract":"<div><h3>Purpose</h3><div>Recent studies utilized ocular images and deep learning (DL) to predict refractive error and yielded notable results. However, most studies did not address biases from imbalanced datasets or conduct external validations. To address these gaps, this study aimed to integrate the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs.</div></div><div><h3>Design</h3><div>Retrospective study.</div></div><div><h3>Subjects</h3><div>We developed the DL models using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted.</div></div><div><h3>Methods</h3><div>This retrospective study developed regression-based models, including ResNet34 with DIR, and SwinV2 (Swin Transformer) with DIR, incorporating Label Distribution Smoothing and Feature Distribution Smoothing. These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power.</div></div><div><h3>Main Outcome Measures</h3><div>Mean absolute error (MAE) and coefficient of determination were used to evaluate the models’ performances. The Wilcoxon signed-rank test was performed to assess statistical significance between DIR-integrated models and their baseline versions.</div></div><div><h3>Results</h3><div>For prediction of the spherical power, ResNet34 with DIR (MAE: 0.84D) and SwinV2 with DIR (MAE: 0.77D) significantly outperformed their baseline—ResNet34 (MAE: 0.88D; <em>P</em> &lt; 0.001) and SwinV2 (MAE: 0.87D; <em>P</em> &lt; 0.001) in internal test. For prediction of the SE power, ResNet34 with DIR (MAE: 0.78D) and SwinV2 with DIR (MAE: 0.75D) consistently significantly outperformed its baseline—ResNet34 (MAE: 0.81D; <em>P</em> &lt; 0.001) and SwinV2 (MAE: 0.78D; <em>P</em> &lt; 0.05) in internal test. Similar trends were observed in external test sets for both spherical and SE power prediction.</div></div><div><h3>Conclusions</h3><div>Deep imbalanced regressed–integrated DL models showed potential in addressing data imbalances and improving the prediction of refractive error. 
These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100659"},"PeriodicalIF":3.2000,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ophthalmology science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666914524001957","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose

Recent studies have used ocular images and deep learning (DL) to predict refractive error, with notable results. However, most did not address biases from imbalanced datasets or conduct external validation. To address these gaps, this study aimed to integrate the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs.

Design

Retrospective study.

Subjects

We developed the DL models using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted.

Methods

This retrospective study developed regression-based models, ResNet34 with DIR and SwinV2 (Swin Transformer V2) with DIR, each incorporating Label Distribution Smoothing (LDS) and Feature Distribution Smoothing (FDS). These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power.
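In the DIR framework (Yang et al., 2021), LDS reweights the training loss by the kernel-smoothed density of the continuous label, so rare refractive errors are not drowned out by common ones, while FDS analogously calibrates feature statistics across neighbouring label bins. The following is a minimal sketch of the LDS reweighting step only, assuming a Gaussian kernel; the bin width, kernel sigma, and all names are illustrative assumptions, not values reported in this paper.

```python
# Minimal sketch of Label Distribution Smoothing (LDS) for imbalanced
# regression. Bin width and sigma are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lds_weights(labels, bin_width=0.5, sigma=2.0):
    """Per-sample loss weights, inversely proportional to the
    kernel-smoothed density of the continuous label (here,
    refractive error in dioptres)."""
    labels = np.asarray(labels, dtype=float)
    edges = np.arange(labels.min(), labels.max() + bin_width, bin_width)
    counts, _ = np.histogram(labels, bins=edges)
    # The LDS step: smooth the empirical label histogram so a rare label
    # "borrows" density from nearby, better-represented labels.
    smoothed = gaussian_filter1d(counts.astype(float), sigma=sigma)
    smoothed = np.clip(smoothed, 1e-6, None)
    # Map each sample to its bin, take the inverse smoothed density,
    # and rescale so the mean weight is 1.
    idx = np.clip(np.digitize(labels, edges) - 1, 0, len(counts) - 1)
    weights = 1.0 / smoothed[idx]
    return weights * len(weights) / weights.sum()

if __name__ == "__main__":
    # Skewed toy labels: mostly mild myopia, a few high myopes.
    rng = np.random.default_rng(0)
    se = np.concatenate([rng.normal(-1.0, 1.0, 950),
                         rng.normal(-9.0, 1.0, 50)])
    w = lds_weights(se)
    print(w[se > -3].mean(), w[se < -6].mean())  # rare labels weigh more
```

In training, these weights would multiply a per-sample L1 or L2 regression loss; FDS, not shown here, smooths running feature-space statistics across neighbouring label bins in a similar spirit.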

Main Outcome Measures

Mean absolute error (MAE) and the coefficient of determination (R²) were used to evaluate model performance. The Wilcoxon signed-rank test was performed to assess the statistical significance of differences between the DIR-integrated models and their baseline versions.
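As a rough illustration of these outcome measures, the sketch below (with hypothetical data and variable names) computes MAE and R² and compares two models' per-image absolute errors with a paired Wilcoxon signed-rank test; the pairing assumes both models are scored on the same test images.

```python
# Illustrative evaluation: MAE, R², and a paired Wilcoxon signed-rank
# test on per-image absolute errors. All data here are synthetic.
import numpy as np
from scipy.stats import wilcoxon

def mae_r2(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae = np.mean(np.abs(y_true - y_pred))      # in dioptres (D)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return mae, 1.0 - ss_res / ss_tot           # R² = 1 - SSres/SStot

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.normal(-1.5, 2.5, 1000)             # true SE per image
    pred_dir = y + rng.normal(0, 0.75, 1000)    # DIR-integrated model
    pred_base = y + rng.normal(0, 0.85, 1000)   # baseline model
    print("DIR:      MAE=%.2f D, R2=%.2f" % mae_r2(y, pred_dir))
    print("Baseline: MAE=%.2f D, R2=%.2f" % mae_r2(y, pred_base))
    # Paired test on per-image absolute errors from the two models.
    stat, p = wilcoxon(np.abs(y - pred_dir), np.abs(y - pred_base))
    print("Wilcoxon signed-rank P = %.3g" % p)
```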

Results

For prediction of spherical power, ResNet34 with DIR (MAE: 0.84 D) and SwinV2 with DIR (MAE: 0.77 D) significantly outperformed their baselines, ResNet34 (MAE: 0.88 D; P < 0.001) and SwinV2 (MAE: 0.87 D; P < 0.001), in the internal test set. For prediction of SE power, ResNet34 with DIR (MAE: 0.78 D) and SwinV2 with DIR (MAE: 0.75 D) likewise significantly outperformed their baselines, ResNet34 (MAE: 0.81 D; P < 0.001) and SwinV2 (MAE: 0.78 D; P < 0.05), in the internal test set. Similar trends were observed in the external test sets for both spherical and SE power prediction.

Conclusions

Deep imbalanced regression-integrated DL models showed potential in addressing data imbalance and improving the prediction of refractive error. These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.