Deep-Reticular Pseudodrusen-Net: A 3-Dimensional Deep Network for Detection of Reticular Pseudodrusen on OCT Scans

Amr Elsawy PhD, Tiarnan D.L. Keenan PhD MD, Alisa T. Thavikulwat MD, Amy Lu MD, Sunil Bellur MD, Souvick Mukherjee PhD, Elvira Agron MS, Qingyu Chen PhD, Emily Y. Chew MD, Zhiyong Lu PhD

Ophthalmology Science, vol. 5, no. 2, Article 100655. Published 2024-11-19. DOI: 10.1016/j.xops.2024.100655

Objective

To propose Deep-RPD-Net, a 3-dimensional deep learning network with semisupervised learning (SSL) for the detection of reticular pseudodrusen (RPD) on spectral-domain OCT scans, explain its decision-making, and compare it with baseline methods.

Design

Deep learning model development.

Participants

Three hundred fifteen participants from the Age-Related Eye Disease Study 2 Ancillary OCT Study (AREDS2) and 161 participants from the Dark Adaptation in Age-related Macular Degeneration Study (DAAMD).

Methods

Two datasets comprising 1304 (826 labeled) and 1479 (1366 labeled) OCT scans were used to develop and evaluate Deep-RPD-Net and the baseline models. The AREDS2 RPD labels were transferred from fundus autofluorescence images captured at the same visit as the OCT scans, and the DAAMD RPD labels were obtained from the Wisconsin reading center. Each dataset was divided at the participant level into training (70%), validation (10%), and test (20%) sets for the baseline model. Semisupervised learning was then applied with the unlabeled OCT scans to improve the trained model. The performance of Deep-RPD-Net was compared with that of 3 retina specialists on a subset of 50 OCT scans from each dataset. En face and B-scan heatmaps of all networks were visualized and graded on 25 OCT scans with positive labels, on a scale of 1 to 4, to explore the models' decision-making.
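Splitting at the participant level (rather than the scan level) matters because multiple scans from one participant are correlated; letting them straddle train and test would leak information. The sketch below illustrates the idea; the function name, the dictionary-based scan records, and the `participant_id` key are illustrative assumptions, not details from the paper.

```python
import random
from collections import defaultdict

def participant_level_split(scans, train_frac=0.7, val_frac=0.1, seed=0):
    """Assign every scan from a given participant to exactly one subset,
    so no participant's data can leak between train, val, and test."""
    by_participant = defaultdict(list)
    for scan in scans:
        by_participant[scan["participant_id"]].append(scan)
    ids = sorted(by_participant)
    random.Random(seed).shuffle(ids)          # deterministic shuffle of participants
    n_train = int(train_frac * len(ids))
    n_val = int(val_frac * len(ids))
    groups = {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }
    # Expand each participant group back into its scans.
    return {name: [s for pid in pids for s in by_participant[pid]]
            for name, pids in groups.items()}
```

Shuffling participant IDs with a fixed seed keeps the split reproducible across runs.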

Main Outcome Measures

Accuracy and area under the receiver-operating characteristic curve (AUROC).
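For a binary task like RPD detection, accuracy scores hard predictions while AUROC scores the ranking of the model's continuous outputs. A minimal pure-Python sketch of both metrics (the function names are illustrative; in practice a library such as scikit-learn would be used):

```python
def accuracy(labels, preds):
    """Fraction of binary predictions that match the labels."""
    return sum(int(l == p) for l, p in zip(labels, preds)) / len(labels)

def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scan receives a higher score than a
    randomly chosen negative scan (ties count as half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because AUROC depends only on the ordering of scores, it is insensitive to the decision threshold that accuracy requires.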

Results

Deep-RPD-Net achieved the highest performance metrics, with accuracy and AUROC of 0.81 (95% confidence interval [CI]: 0.76–0.87) and 0.91 (95% CI: 0.86–0.95) on the AREDS2 dataset and 0.80 (95% CI: 0.75–0.84) and 0.86 (95% CI: 0.79–0.91) on the DAAMD dataset. On the subjective test, it achieved an accuracy of 0.84, compared with 0.76 for the most accurate retina specialist, on the AREDS2 dataset, and 0.82 compared with 0.84 on the DAAMD dataset. It also achieved the highest visualization grades: 3.26 for en face and 3.32 for B-scan heatmaps.
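The 95% CIs reported above can be obtained with a percentile bootstrap: resample the test set with replacement many times, recompute the metric on each resample, and take the 2.5th and 97.5th percentiles. The paper does not state its exact CI procedure, so the sketch below is one common approach, with illustrative names:

```python
import random

def bootstrap_ci(labels, preds, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for any metric(labels, preds) statistic."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        # Resample test-set indices with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(metric([labels[i] for i in idx],
                            [preds[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Resampling at the scan level is shown for simplicity; with multiple scans per participant, resampling participants instead would better respect the correlation structure.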

Conclusions

Deep-RPD-Net detected RPD accurately on OCT scans, and its visualizations were the most explainable to the retina specialist with the highest accuracy. The code and pretrained models are publicly available at https://github.com/ncbi-nlp/Deep-RPD-Net.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.