{"title":"利用眼底摄影和光学相干断层成像的双峰成像,深度学习对多种视网膜疾病的诊断性能和推广。","authors":"Xingwang Gu, Yang Zhou, Jianchun Zhao, Hongzhe Zhang, Xinlei Pan, Bing Li, Bilei Zhang, Yuelin Wang, Song Xia, Hailan Lin, Jie Wang, Dayong Ding, Xirong Li, Shan Wu, Jingyuan Yang, Youxin Chen","doi":"10.3389/fcell.2025.1665173","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To develop and evaluate deep learning (DL) models for detecting multiple retinal diseases using bimodal imaging of color fundus photography (CFP) and optical coherence tomography (OCT), assessing diagnostic performance and generalizability.</p><p><strong>Methods: </strong>This cross-sectional study utilized 1445 CFP-OCT pairs from 1,029 patients across three hospitals. Five bimodal models developed, and the model with best performance (Fusion-MIL) was tested and compared with CFP-MIL and OCT-MIL. Models were trained on 710 pairs (Maestro device), validated on 241, and tested on 255 (dataset 1). Additional tests used different devices and scanning patterns: 88 pairs (dataset 2, DRI-OCT), 91 (dataset 3, DRI-OCT), 60 (dataset 4, Visucam/VG200 OCT). Seven retinal conditions, including normal, diabetic retinopathy, dry and wet age-related macular degeneration, pathologic myopia (PM), epiretinal membran, and macular edema, were assessed. PM ATN (atrophy, traction, neovascularization) classification was trained and tested on another 1,184 pairs. Area under receiver operating characteristic curve (AUC) was calculated to evaluated the performance.</p><p><strong>Results: </strong>Fusion-MIL achieved mean AUC 0.985 (95% CI 0.971-0.999) in dataset 2, outperforming CFP-MIL (0.876, <i>P</i> < 0.001) and OCT-MIL (0.982, <i>P</i> = 0.337), as well as in dataset 3 (0.978 vs. 0.913, <i>P</i> < 0.001 and 0.962, <i>P</i> = 0.025) and dataset 4 (0.962 vs. 0.962, <i>P</i> < 0.001 and 0.962, <i>P</i> = 0.079). Fusion-MIL also achieved superior accuracy. 
In ATN classification, AUC ranges 0.902-0.997 for atrophy, 0.869-0.982 for traction, and 0.742-0.976 for neovascularization.</p><p><strong>Conclusion: </strong>Bimodal Fusion-MIL improved diagnosis over single-modal models, showing strong generalizability across devices and detailed grading ability, valuable for various scenarios.</p>","PeriodicalId":12448,"journal":{"name":"Frontiers in Cell and Developmental Biology","volume":"13 ","pages":"1665173"},"PeriodicalIF":4.6000,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12460420/pdf/","citationCount":"0","resultStr":"{\"title\":\"Diagnostic performance and generalizability of deep learning for multiple retinal diseases using bimodal imaging of fundus photography and optical coherence tomography.\",\"authors\":\"Xingwang Gu, Yang Zhou, Jianchun Zhao, Hongzhe Zhang, Xinlei Pan, Bing Li, Bilei Zhang, Yuelin Wang, Song Xia, Hailan Lin, Jie Wang, Dayong Ding, Xirong Li, Shan Wu, Jingyuan Yang, Youxin Chen\",\"doi\":\"10.3389/fcell.2025.1665173\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>To develop and evaluate deep learning (DL) models for detecting multiple retinal diseases using bimodal imaging of color fundus photography (CFP) and optical coherence tomography (OCT), assessing diagnostic performance and generalizability.</p><p><strong>Methods: </strong>This cross-sectional study utilized 1445 CFP-OCT pairs from 1,029 patients across three hospitals. Five bimodal models developed, and the model with best performance (Fusion-MIL) was tested and compared with CFP-MIL and OCT-MIL. Models were trained on 710 pairs (Maestro device), validated on 241, and tested on 255 (dataset 1). Additional tests used different devices and scanning patterns: 88 pairs (dataset 2, DRI-OCT), 91 (dataset 3, DRI-OCT), 60 (dataset 4, Visucam/VG200 OCT). 
Seven retinal conditions, including normal, diabetic retinopathy, dry and wet age-related macular degeneration, pathologic myopia (PM), epiretinal membran, and macular edema, were assessed. PM ATN (atrophy, traction, neovascularization) classification was trained and tested on another 1,184 pairs. Area under receiver operating characteristic curve (AUC) was calculated to evaluated the performance.</p><p><strong>Results: </strong>Fusion-MIL achieved mean AUC 0.985 (95% CI 0.971-0.999) in dataset 2, outperforming CFP-MIL (0.876, <i>P</i> < 0.001) and OCT-MIL (0.982, <i>P</i> = 0.337), as well as in dataset 3 (0.978 vs. 0.913, <i>P</i> < 0.001 and 0.962, <i>P</i> = 0.025) and dataset 4 (0.962 vs. 0.962, <i>P</i> < 0.001 and 0.962, <i>P</i> = 0.079). Fusion-MIL also achieved superior accuracy. In ATN classification, AUC ranges 0.902-0.997 for atrophy, 0.869-0.982 for traction, and 0.742-0.976 for neovascularization.</p><p><strong>Conclusion: </strong>Bimodal Fusion-MIL improved diagnosis over single-modal models, showing strong generalizability across devices and detailed grading ability, valuable for various scenarios.</p>\",\"PeriodicalId\":12448,\"journal\":{\"name\":\"Frontiers in Cell and Developmental Biology\",\"volume\":\"13 \",\"pages\":\"1665173\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12460420/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Cell and Developmental Biology\",\"FirstCategoryId\":\"99\",\"ListUrlMain\":\"https://doi.org/10.3389/fcell.2025.1665173\",\"RegionNum\":2,\"RegionCategory\":\"生物学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"CELL 
BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Cell and Developmental Biology","FirstCategoryId":"99","ListUrlMain":"https://doi.org/10.3389/fcell.2025.1665173","RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"CELL BIOLOGY","Score":null,"Total":0}
Citations: 0
Abstract
Diagnostic performance and generalizability of deep learning for multiple retinal diseases using bimodal imaging of fundus photography and optical coherence tomography.
Purpose: To develop and evaluate deep learning (DL) models for detecting multiple retinal diseases using bimodal imaging of color fundus photography (CFP) and optical coherence tomography (OCT), assessing diagnostic performance and generalizability.
Methods: This cross-sectional study used 1,445 CFP-OCT pairs from 1,029 patients across three hospitals. Five bimodal models were developed, and the best-performing model (Fusion-MIL) was tested and compared with CFP-MIL and OCT-MIL. Models were trained on 710 pairs (Maestro device), validated on 241, and tested on 255 (dataset 1). Additional tests used different devices and scanning patterns: 88 pairs (dataset 2, DRI-OCT), 91 (dataset 3, DRI-OCT), and 60 (dataset 4, Visucam/VG200 OCT). Seven retinal conditions were assessed: normal, diabetic retinopathy, dry and wet age-related macular degeneration, pathologic myopia (PM), epiretinal membrane, and macular edema. PM ATN (atrophy, traction, neovascularization) classification was trained and tested on a further 1,184 pairs. Area under the receiver operating characteristic curve (AUC) was calculated to evaluate performance.
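The abstract names multiple-instance learning (MIL) over bimodal CFP-OCT input but does not describe the architecture. As a rough illustration of the general technique only, here is a minimal NumPy sketch of attention-based MIL pooling over per-B-scan OCT features fused with a CFP feature by concatenation; the attention pooling, feature dimensions, and late-fusion-by-concatenation are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def mil_attention_pool(instance_feats, w):
    """Attention-based MIL pooling (hypothetical): weight each OCT B-scan
    feature by an attention score and sum into one bag-level feature."""
    scores = instance_feats @ w          # one score per B-scan, shape (n,)
    attn = softmax(scores)               # attention weights sum to 1
    return attn @ instance_feats         # pooled bag feature, shape (d,)

def fuse(cfp_feat, oct_bag_feat):
    """Late fusion sketch: concatenate the CFP feature with the pooled
    OCT bag feature before a downstream classifier head."""
    return np.concatenate([cfp_feat, oct_bag_feat])

rng = np.random.default_rng(0)
oct_feats = rng.normal(size=(12, 8))   # 12 B-scans, 8-dim features each
w = rng.normal(size=8)                 # hypothetical attention vector
cfp_feat = rng.normal(size=8)          # hypothetical CFP feature

bag = mil_attention_pool(oct_feats, w)
fused = fuse(cfp_feat, bag)
print(fused.shape)  # (16,)
```

In a trained model, `w` and the feature extractors would be learned end to end; the sketch only shows how a variable-length stack of B-scans collapses to a fixed-size vector that can be fused with the fundus-photo representation.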
Results: Fusion-MIL achieved a mean AUC of 0.985 (95% CI 0.971-0.999) in dataset 2, outperforming CFP-MIL (0.876, P < 0.001) and OCT-MIL (0.982, P = 0.337), as well as in dataset 3 (0.978 vs. 0.913, P < 0.001 and 0.962, P = 0.025) and dataset 4 (0.962 vs. 0.962, P < 0.001 and 0.962, P = 0.079). Fusion-MIL also achieved superior accuracy. In ATN classification, AUCs ranged from 0.902 to 0.997 for atrophy, 0.869 to 0.982 for traction, and 0.742 to 0.976 for neovascularization.
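The results rest on AUC point estimates with 95% confidence intervals. As a self-contained sketch of that evaluation (the paper's exact statistical procedure is not stated in the abstract), the following computes AUC via the Mann-Whitney U statistic and a percentile-bootstrap 95% CI; the toy labels and scores are illustrative only.

```python
import numpy as np

def auc(y_true, y_score):
    """AUC as the probability that a random positive scores higher
    than a random negative (Mann-Whitney U), ties counting half."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(y_true, y_score, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for the AUC."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue                          # resample lacks both classes
        stats.append(auc(y_true[idx], y_score[idx]))
    return np.percentile(stats, [2.5, 97.5])

# Toy example: 4 positives, 4 negatives
y = np.array([0, 0, 0, 1, 1, 1, 0, 1])
s = np.array([0.1, 0.7, 0.35, 0.8, 0.65, 0.9, 0.2, 0.6])
print(auc(y, s))  # 0.875
```

For a multi-class problem like the seven conditions here, this would typically be applied one-vs-rest per class and the per-class AUCs averaged into the mean AUC reported above; pairwise model comparisons (the quoted P values) would use a paired test on AUCs such as DeLong's, which this sketch does not implement.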
Conclusion: The bimodal Fusion-MIL model improved diagnosis over single-modal models, showing strong generalizability across devices and fine-grained grading ability, making it valuable for a range of clinical scenarios.
Journal introduction:
Frontiers in Cell and Developmental Biology is a broad-scope, interdisciplinary open-access journal, focusing on the fundamental processes of life, led by Prof Amanda Fisher and supported by a geographically diverse, high-quality editorial board.
The journal welcomes submissions across a wide spectrum of cell and developmental biology, covering intracellular and extracellular dynamics, with sections focusing on signaling, adhesion, migration, cell death and survival, and membrane trafficking. Additionally, the journal offers sections dedicated to cutting-edge fundamental and translational research in molecular medicine and stem cell biology.
Through collaborative, rigorous and transparent peer review, the journal publishes work of the highest scientific quality in both fundamental and applied research, and advanced article-level metrics measure the real-time impact and influence of each publication.