Robust lumen segmentation based on temporal residual U-Net using spatiotemporal features in intravascular optical coherence tomography images

Authors: Mingrui He, Yin Yu, Kun Liu, Rongyang Zhu, Qingrui Li, Yanjia Wang, Shanshan Zhou, Hao Kuang, Junfeng Jiang, Tiegen Liu, Zhenyang Ding
Journal: Journal of Biomedical Optics, Vol. 30, No. 10, p. 106003 (published 2025-10-01)
DOI: 10.1117/1.JBO.30.10.106003
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12498255/pdf/
Citations: 0
Abstract
Significance: Lumen segmentation in intravascular optical coherence tomography (IVOCT) images is essential for quantifying vascular stenosis severity, location, and length. Current methods relying on manual parameter tuning or single-frame spatial features struggle with image artifacts, limiting clinical utility.
Aim: We aim to develop a temporal residual U-Net (TR-Unet) leveraging spatiotemporal feature fusion for robust IVOCT lumen segmentation, particularly in artifact-corrupted images.
Approach: We integrate convolutional long short-term memory networks to capture vascular morphology evolution across pullback sequences, enhanced ResUnet for spatial feature extraction, and coordinate attention mechanisms for adaptive spatiotemporal fusion.
Results: On a dataset of 2451 clinical images, the proposed TR-Unet model achieves strong performance: Dice coefficient = 98.54%, Jaccard similarity (JS) = 97.17%, and recall = 98.26%. Evaluations on images severely corrupted by blood artifacts show improvements of 3.01% (Dice), 1.3% (accuracy), 5.24% (JS), 2.15% (recall), and 2.06% (precision) over competing methods.
Conclusions: TR-Unet establishes a robust and effective spatiotemporal fusion paradigm for IVOCT segmentation, demonstrating significant robustness to artifacts and providing architectural insights for temporal modeling optimization.
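The Approach describes coordinate attention as the mechanism for adaptively fusing spatial and temporal features. As a rough illustration of the general idea (not the authors' implementation), the sketch below applies a simplified coordinate-attention gate in NumPy: the feature map is average-pooled along each spatial axis, each pooled profile is passed through a hypothetical per-axis 1x1 transform (`w_h`, `w_w`, assumed here) and a sigmoid, and the resulting direction-aware gates reweight the input. The full mechanism in the literature shares an intermediate transform between the two branches, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate-attention gate.

    x:   feature map of shape (C, H, W)
    w_h: (C, C) weight for the height-wise branch (stand-in for a 1x1 conv)
    w_w: (C, C) weight for the width-wise branch
    """
    pool_h = x.mean(axis=2)                    # (C, H): average over width
    pool_w = x.mean(axis=1)                    # (C, W): average over height
    a_h = 1.0 / (1.0 + np.exp(-(w_h @ pool_h)))  # (C, H) sigmoid gate in (0, 1)
    a_w = 1.0 / (1.0 + np.exp(-(w_w @ pool_w)))  # (C, W) sigmoid gate in (0, 1)
    # Broadcast the two direction-aware gates back over the feature map.
    return x * a_h[:, :, None] * a_w[:, None, :]

x = rng.standard_normal((8, 16, 16))
w_h = 0.1 * rng.standard_normal((8, 8))
w_w = 0.1 * rng.standard_normal((8, 8))
y = coordinate_attention(x, w_h, w_w)
print(y.shape)  # same shape as the input: (8, 16, 16)
```

Because both gates lie in (0, 1), the output is an elementwise attenuation of the input, with different weights per channel, row, and column; in TR-Unet this kind of gating is applied to the fused spatiotemporal features rather than to raw images.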
Journal Description
The Journal of Biomedical Optics publishes peer-reviewed papers on the use of modern optical technology for improved health care and biomedical research.