Uncertainty-guided cross-level fusion network for retinal OCT image segmentation

IF 3.2 | CAS Zone 2 (Medicine) | Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Medical Physics · Pub Date: 2025-09-01 · DOI: 10.1002/mp.18102
Jiaxin Wang, Weifang Zhu, Dehui Xiang, Xinjian Chen, Tao Peng, Qing Peng, Meng Wang, Fei Shi
{"title":"不确定性引导下视网膜OCT图像分割的交叉融合网络","authors":"Jiaxin Wang,&nbsp;Weifang Zhu,&nbsp;Dehui Xiang,&nbsp;Xinjian Chen,&nbsp;Tao Peng,&nbsp;Qing Peng,&nbsp;Meng Wang,&nbsp;Fei Shi","doi":"10.1002/mp.18102","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Deep learning-based segmentation methods for optical coherence tomography (OCT) have demonstrated outstanding performance. However, the stochastic distribution of training data and the inherent limitations of deep neural networks introduce uncertainty into the segmentation process. Accurately estimating this uncertainty is essential for generating reliable confidence assessments and improving model predictions.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>To address these challenges, we propose a novel uncertainty-guided cross-layer fusion network (UGCFNet) for retinal OCT segmentation. UGCFNet integrates uncertainty quantification into the training process of deep neural networks and leverages this uncertainty to enhance segmentation accuracy.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>Our model employs an encoder–decoder architecture that quantitatively assesses uncertainty at multiple stages, directing the network's focus toward regions with higher uncertainty. By facilitating cross-layer feature fusion, UGCFNet enhances the comprehensive understanding of both semantic information and morphological details. Additionally, we incorporate an improved Bayesian neural network loss function alongside an uncertainty-aware loss function, enabling the network to effectively utilize these mechanisms for better uncertainty modeling.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>We conducted extensive experiments on the publicly available AI-Challenger and OIMHS OCT segmentation datasets. The training, validation, and testing sets of the AI-Challenger dataset are comprised of 32, 8, and 43 OCT volumes, yielding a total of 4096, 1024, and 5504 B-scans, respectively. The training, validation, and testing sets of the OIMHS dataset consist of 100, 25, and 25 OCT volumes, resulting in 2,310, 798, and 751 B-scans, respectively. The results demonstrate that UGCFNet achieves state-of-the-art performance, with average Dice similarity coefficients of 79.47% and 93.22% on the respective datasets.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>Our proposed UGCFNet significantly advances retinal OCT segmentation by integrating uncertainty guidance and cross-level feature fusion, offering more reliable and accurate segmentation outcomes.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 9","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Uncertainty-guided cross-level fusion network for retinal OCT image segmentation\",\"authors\":\"Jiaxin Wang,&nbsp;Weifang Zhu,&nbsp;Dehui Xiang,&nbsp;Xinjian Chen,&nbsp;Tao Peng,&nbsp;Qing Peng,&nbsp;Meng Wang,&nbsp;Fei Shi\",\"doi\":\"10.1002/mp.18102\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Deep learning-based segmentation methods for optical coherence tomography (OCT) have demonstrated outstanding performance. 
However, the stochastic distribution of training data and the inherent limitations of deep neural networks introduce uncertainty into the segmentation process. Accurately estimating this uncertainty is essential for generating reliable confidence assessments and improving model predictions.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Purpose</h3>\\n \\n <p>To address these challenges, we propose a novel uncertainty-guided cross-layer fusion network (UGCFNet) for retinal OCT segmentation. UGCFNet integrates uncertainty quantification into the training process of deep neural networks and leverages this uncertainty to enhance segmentation accuracy.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>Our model employs an encoder–decoder architecture that quantitatively assesses uncertainty at multiple stages, directing the network's focus toward regions with higher uncertainty. By facilitating cross-layer feature fusion, UGCFNet enhances the comprehensive understanding of both semantic information and morphological details. Additionally, we incorporate an improved Bayesian neural network loss function alongside an uncertainty-aware loss function, enabling the network to effectively utilize these mechanisms for better uncertainty modeling.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>We conducted extensive experiments on the publicly available AI-Challenger and OIMHS OCT segmentation datasets. The training, validation, and testing sets of the AI-Challenger dataset are comprised of 32, 8, and 43 OCT volumes, yielding a total of 4096, 1024, and 5504 B-scans, respectively. The training, validation, and testing sets of the OIMHS dataset consist of 100, 25, and 25 OCT volumes, resulting in 2,310, 798, and 751 B-scans, respectively. The results demonstrate that UGCFNet achieves state-of-the-art performance, with average Dice similarity coefficients of 79.47% and 93.22% on the respective datasets.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>Our proposed UGCFNet significantly advances retinal OCT segmentation by integrating uncertainty guidance and cross-level feature fusion, offering more reliable and accurate segmentation outcomes.</p>\\n </section>\\n </div>\",\"PeriodicalId\":18384,\"journal\":{\"name\":\"Medical physics\",\"volume\":\"52 9\",\"pages\":\"\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical physics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.18102\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://aapm.onlinelibrary.wiley.com/doi/10.1002/mp.18102","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract



Background

Deep learning-based segmentation methods for optical coherence tomography (OCT) have demonstrated outstanding performance. However, the stochastic distribution of training data and the inherent limitations of deep neural networks introduce uncertainty into the segmentation process. Accurately estimating this uncertainty is essential for generating reliable confidence assessments and improving model predictions.
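
For concreteness, one common per-pixel uncertainty estimate is the entropy of the softmax output; the minimal PyTorch sketch below illustrates the idea (this particular formulation is our assumption for illustration, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def entropy_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """logits: (B, C, H, W) raw class scores from a segmentation network.
    Returns a (B, H, W) map; higher entropy marks less confident pixels."""
    probs = F.softmax(logits, dim=1)                  # per-pixel class probabilities
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1)

# Example: a 4-class retinal-layer output on one 256 x 256 B-scan.
logits = torch.randn(1, 4, 256, 256)
uncertainty = entropy_uncertainty(logits)             # shape: (1, 256, 256)
```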

Purpose

To address these challenges, we propose a novel uncertainty-guided cross-level fusion network (UGCFNet) for retinal OCT segmentation. UGCFNet integrates uncertainty quantification into the training process of deep neural networks and leverages this uncertainty to enhance segmentation accuracy.

Methods

Our model employs an encoder–decoder architecture that quantitatively assesses uncertainty at multiple stages, directing the network's focus toward regions with higher uncertainty. By facilitating cross-level feature fusion, UGCFNet builds a comprehensive understanding of both semantic information and morphological details. Additionally, we combine an improved Bayesian neural network loss function with an uncertainty-aware loss function, enabling the network to exploit both mechanisms for better uncertainty modeling.
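
The abstract does not detail the module design, so the following is a hypothetical PyTorch sketch of how a per-pixel uncertainty map could gate cross-level feature fusion in an encoder–decoder; the class name UncertaintyGuidedFusion and the residual gating scheme are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyGuidedFusion(nn.Module):
    """Fuse deep (semantic) and shallow (detail) encoder features,
    re-weighted by a per-pixel uncertainty map so that high-uncertainty
    regions receive extra attention."""

    def __init__(self, deep_ch: int, shallow_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(deep_ch + shallow_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep: torch.Tensor, shallow: torch.Tensor,
                uncertainty: torch.Tensor) -> torch.Tensor:
        # Bring the deep features and the uncertainty map to the
        # shallow feature resolution.
        deep = F.interpolate(deep, size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        u = F.interpolate(uncertainty.unsqueeze(1), size=shallow.shape[-2:],
                          mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([deep, shallow], dim=1))
        # Residual gating: amplify responses where uncertainty is high.
        return fused * (1.0 + torch.sigmoid(u))

# Example: fuse a 256-channel deep map with a 64-channel shallow map.
deep = torch.randn(1, 256, 32, 32)
shallow = torch.randn(1, 64, 128, 128)
uncertainty = torch.rand(1, 32, 32)           # e.g., softmax entropy
module = UncertaintyGuidedFusion(256, 64, 64)
out = module(deep, shallow, uncertainty)      # shape: (1, 64, 128, 128)
```

Gating with 1 + sigmoid(u) keeps all features while boosting responses in uncertain regions, which is one plausible way to "direct the network's focus toward regions with higher uncertainty" as described above.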

Results

We conducted extensive experiments on the publicly available AI-Challenger and OIMHS OCT segmentation datasets. The training, validation, and testing sets of the AI-Challenger dataset comprise 32, 8, and 43 OCT volumes, yielding 4096, 1024, and 5504 B-scans, respectively. The training, validation, and testing sets of the OIMHS dataset consist of 100, 25, and 25 OCT volumes, yielding 2310, 798, and 751 B-scans, respectively. The results demonstrate that UGCFNet achieves state-of-the-art performance, with average Dice similarity coefficients of 79.47% and 93.22% on the respective datasets.
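
For reference, this is a minimal sketch of how a mean Dice similarity coefficient like the one reported above can be computed from integer label maps (per class, then averaged; the exact averaging protocol is our assumption, as the abstract does not specify it):

```python
import torch

def mean_dice(pred: torch.Tensor, target: torch.Tensor, num_classes: int) -> float:
    """pred, target: (H, W) integer label maps. Returns the Dice
    similarity coefficient averaged over the classes present."""
    scores = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        denom = p.sum().item() + t.sum().item()
        if denom > 0:  # skip classes absent from both maps
            inter = (p & t).sum().item()
            scores.append(2.0 * inter / denom)
    return sum(scores) / len(scores) if scores else 0.0

# Example: two 4-class label maps.
pred = torch.randint(0, 4, (256, 256))
target = torch.randint(0, 4, (256, 256))
print(f"mean Dice: {mean_dice(pred, target, num_classes=4):.4f}")
```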

Conclusion

Our proposed UGCFNet significantly advances retinal OCT segmentation by integrating uncertainty guidance and cross-level feature fusion, offering more reliable and accurate segmentation outcomes.

Source journal
Medical Physics (Medicine – Nuclear Medicine)
CiteScore: 6.80
Self-citation rate: 15.80%
Annual publications: 660
Review time: 1.7 months
Journal description: Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in 1) basic science developments with high potential for clinical translation, 2) clinical applications of cutting-edge engineering and physics innovations, and 3) broadly applicable and innovative clinical physics developments. Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.