Glaucoma detection in myopic eyes using deep learning autoencoder-based regions of interest.

Frontiers in Ophthalmology (IF 0.9) · Pub Date: 2025-08-04 · eCollection Date: 2025-01-01 · DOI: 10.3389/fopht.2025.1624015 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12358265/pdf/
Christopher Bowd, Akram Belghith, Mark Christopher, Makoto Araie, Aiko Iwase, Goji Tomita, Kyoko Ohno-Matsui, Hitomi Saito, Hiroshi Murata, Tsutomu Kikawa, Kazuhisa Sugiyama, Tomomi Higashide, Atsuya Miki, Toru Nakazawa, Makoto Aihara, Tae-Woo Kim, Christopher Kai Shun Leung, Robert N Weinreb, Linda M Zangwill

Abstract

Purpose: To evaluate the diagnostic accuracy of a deep learning autoencoder-based model utilizing regions of interest (ROI) from optical coherence tomography (OCT) texture enface images for detecting glaucoma in myopic eyes.

Methods: This cross-sectional study included a total of 453 eyes from 315 participants from the multi-center "Swept-Source OCT (SS-OCT) Myopia and Glaucoma Study", composed of 268 eyes from 168 healthy individuals and 185 eyes from 147 glaucomatous individuals. All participants underwent swept-source optical coherence tomography (SS-OCT) imaging, from which texture enface images were constructed and analyzed. The study compared four methods: (1) global RNFL thickness, (2) texture enface image, (3) a single autoencoder model trained only on healthy eyes, and (4) a dual autoencoder model trained on both healthy and glaucomatous eyes. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AUROC) and the precision-recall curve (AUPRC).
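The dual autoencoder approach described above can be sketched with a linear stand-in: PCA acts as a linear autoencoder, so fitting one per class and comparing reconstruction errors illustrates the idea. Everything here (feature dimension, latent size `k`, the synthetic data) is illustrative, not the paper's architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_ae(X, k):
    # Center the data and keep the top-k principal directions
    # (a PCA subspace is the optimal linear autoencoder of rank k).
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recon_error(x, mu, W):
    # Encode onto the k-dim subspace, decode back, measure the residual.
    z = (x - mu) @ W.T
    return np.linalg.norm(x - (mu + z @ W))

# Toy two-class data standing in for texture en-face ROI features.
healthy = rng.normal(0.0, 1.0, size=(200, 16))
glaucoma = rng.normal(2.0, 1.0, size=(200, 16))

ae_healthy = fit_linear_ae(healthy, k=4)
ae_glaucoma = fit_linear_ae(glaucoma, k=4)

def dual_ae_score(x):
    # Positive score -> the sample is reconstructed better by the
    # glaucoma-trained autoencoder, i.e. looks glaucomatous.
    return recon_error(x, *ae_healthy) - recon_error(x, *ae_glaucoma)

x = rng.normal(2.0, 1.0, size=16)  # unseen "glaucomatous" sample
print(dual_ae_score(x) > 0)
```

The single-autoencoder variant corresponds to using `recon_error(x, *ae_healthy)` alone as the anomaly score; the dual model's advantage in the study comes from contrasting the two reconstruction errors.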

Results: The dual autoencoder model achieved the highest AUROC (95% CI) (0.92 [0.88, 0.95]), significantly outperforming the single autoencoder model trained only on healthy eyes (0.86 [0.83, 0.88], p = 0.01), the global RNFL thickness model (0.84 [0.80, 0.86], p = 0.003), and the texture enface model (0.83 [0.79, 0.85], p = 0.005). Using AUPRC (95% CI), the dual autoencoder model (0.86 [0.83, 0.89]) also outperformed the single autoencoder model trained only on healthy eyes (0.80 [0.78, 0.82], p = 0.02), the global RNFL thickness model (0.74 [0.70, 0.76], p = 0.001), and the texture enface model (0.71 [0.68, 0.73], p < 0.001). No significant difference was observed between the global RNFL thickness measurement and the texture enface measurement (p = 0.47).
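The two metrics reported above can be computed from a score vector and binary labels without any library: AUROC is the Mann-Whitney probability that a random positive outranks a random negative, and AUPRC is commonly estimated by average precision. The scores and labels below are a made-up toy, used only to show the computation.

```python
import numpy as np

def auroc(scores, labels):
    # Probability that a random positive scores higher than a random
    # negative (Mann-Whitney U statistic, ties counted as half-wins).
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def auprc(scores, labels):
    # Average precision: rank by score, average the precision
    # observed at each true positive.
    order = np.argsort(-scores)
    y = labels[order]
    tp = np.cumsum(y)
    precision = tp / np.arange(1, len(y) + 1)
    return precision[y == 1].mean()

scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])
print(round(auroc(scores, labels), 3))  # 0.889
print(round(auprc(scores, labels), 3))  # 0.917
```

AUPRC is the more informative of the two when classes are imbalanced, which is why the study reports both.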

Discussion: The dual autoencoder model, which integrates reconstruction errors from both healthy and glaucomatous training data, demonstrated superior diagnostic accuracy compared to the single autoencoder model, global RNFL thickness, and texture enface-based approaches. These findings suggest that deep learning models leveraging ROI-based reconstruction error from texture enface images may enhance glaucoma classification in myopic eyes, providing a robust alternative to conventional structural thickness metrics.
