Robust lumen segmentation based on temporal residual U-Net using spatiotemporal features in intravascular optical coherence tomography images.

IF 2.9 · CAS Tier 3 (Medicine) · JCR Q2 · Biochemical Research Methods
Mingrui He, Yin Yu, Kun Liu, Rongyang Zhu, Qingrui Li, Yanjia Wang, Shanshan Zhou, Hao Kuang, Junfeng Jiang, Tiegen Liu, Zhenyang Ding
Journal: Journal of Biomedical Optics, vol. 30, no. 10, p. 106003
DOI: 10.1117/1.JBO.30.10.106003
Published: 2025-10-01 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12498255/pdf/
Citations: 0

Abstract

Significance: Lumen segmentation in intravascular optical coherence tomography (IVOCT) images is essential for quantifying vascular stenosis severity, location, and length. Current methods relying on manual parameter tuning or single-frame spatial features struggle with image artifacts, limiting clinical utility.

Aim: We aim to develop a temporal residual U-Net (TR-Unet) leveraging spatiotemporal feature fusion for robust IVOCT lumen segmentation, particularly in artifact-corrupted images.

Approach: We integrate convolutional long short-term memory (ConvLSTM) networks to capture the evolution of vascular morphology across pullback sequences, an enhanced ResUnet for spatial feature extraction, and coordinate attention mechanisms for adaptive spatiotemporal fusion.
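The temporal component above relies on the standard LSTM recurrence, which a ConvLSTM applies per feature map with convolutions in place of scalar products. As an illustrative sketch only (not the authors' implementation), the gating logic for a single pixel looks like this, with all weight names hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One recurrence step for a single pixel value.

    In a ConvLSTM, each product below is a 2D convolution over
    feature maps; scalars keep the gating logic visible.
    `w` holds input weights, recurrent weights, and biases per gate.
    """
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g   # carry vessel-shape memory across frames
    h = o * math.tanh(c)     # hidden state passed to the next frame
    return h, c

# Run the recurrence along a toy pullback sequence of intensities.
w = {k: 0.1 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [0.2, 0.8, 0.5, 0.9]:
    h, c = lstm_step(x, h, c, w)
```

The forget gate is what lets the network retain lumen-contour information from clean neighboring frames when the current frame is corrupted by blood artifacts.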

Results: On a dataset of 2451 clinical images, the proposed TR-Unet model achieves a Dice coefficient of 98.54%, Jaccard similarity (JS) of 97.17%, and recall of 98.26%. On severely blood-artifact-corrupted images, it improves on competing methods by 3.01% (Dice), 1.3% (accuracy), 5.24% (JS), 2.15% (recall), and 2.06% (precision).
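The reported metrics have standard definitions over binary segmentation masks. A minimal sketch of how they are computed on flattened masks (illustrative only, not the authors' evaluation code):

```python
def confusion(pred, truth):
    """Count true positives, false positives, and false negatives
    for two equal-length binary masks (flattened lumen maps)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, fp, fn

def dice(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def jaccard(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    return tp / (tp + fp + fn)

def recall(pred, truth):
    tp, _, fn = confusion(pred, truth)
    return tp / (tp + fn)

def precision(pred, truth):
    tp, fp, _ = confusion(pred, truth)
    return tp / (tp + fp)
```

Per image, Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), so the two scores track each other; they diverge slightly when averaged over a dataset.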

Conclusions: TR-Unet establishes a robust and effective spatiotemporal fusion paradigm for IVOCT segmentation, demonstrating significant robustness to artifacts and providing architectural insights for temporal modeling optimization.

Source journal
CiteScore: 6.40
Self-citation rate: 5.70%
Articles per year: 263
Review time: 2 months
About the journal: The Journal of Biomedical Optics publishes peer-reviewed papers on the use of modern optical technology for improved health care and biomedical research.