Sli2Vol+: Segmenting 3D Medical Images Based on an Object Estimation Guided Correspondence Flow Network.

Delin An, Pengfei Gu, Milan Sonka, Chaoli Wang, Danny Z Chen
DOI: 10.1109/wacv61041.2025.00357
Venue: IEEE Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 3624-3634
Publication date: 2025-02-01 (Epub: 2025-04-08)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12459605/pdf/

Abstract

Deep learning (DL) methods have shown remarkable success in medical image segmentation, but they often rely on large amounts of annotated data for model training. However, acquiring large, diverse, labeled 3D medical image datasets is difficult and expensive. Recently, mask propagation DL methods were developed to reduce the annotation burden for 3D medical images. For example, Sli2Vol [59] proposed a self-supervised framework (SSF) that learns correspondences by matching neighboring slices via slice reconstruction in the training stage; the learned correspondences are then used to propagate a labeled slice to the other slices in the test stage. However, such methods remain prone to error accumulation, since reconstruction errors propagate from slice to slice. Moreover, because they emphasize exploiting object continuity, they do not handle well the discontinuities that can occur between consecutive slices of 3D images. To address these challenges, we propose a new SSF, called Sli2Vol+, for segmenting any anatomical structure in 3D medical images using only a single annotated slice per training and testing volume. Specifically, in the training stage, we first propagate an annotated 2D slice of a training volume to the other slices, generating pseudo-labels (PLs). Then, we develop a novel Object Estimation Guided Correspondence Flow Network to learn reliable correspondences between consecutive slices and their corresponding PLs in a self-supervised manner. In the test stage, these correspondences are used to propagate a single annotated slice to the other slices of a test volume. We demonstrate the effectiveness of our method on various medical image segmentation tasks across different datasets, showing better generalizability across different organs and modalities. Code is available at https://github.com/adlsn/Sli2VolPlus.
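The core mechanism the abstract describes, propagating a single annotated slice through a volume via learned inter-slice correspondences, can be illustrated with a minimal sketch. This is not the paper's actual network: the feature maps, the cosine-similarity affinity, the softmax temperature, and the function names below are all illustrative assumptions; the real method learns features with an Object Estimation Guided Correspondence Flow Network. The sketch only shows how a row-stochastic affinity matrix carries a mask from one slice to the next, and why chaining such steps accumulates error.

```python
import numpy as np

def propagate_mask(feat_src, feat_tgt, mask_src, temperature=0.07):
    """Carry a per-pixel mask from a source slice to the next slice using
    a softmax affinity over feature similarities (illustrative only).

    feat_src, feat_tgt: (C, H, W) feature maps of two consecutive slices.
    mask_src: (H, W) label mask of the source slice.
    """
    C, H, W = feat_src.shape
    fs = feat_src.reshape(C, -1)  # (C, HW) source features
    ft = feat_tgt.reshape(C, -1)  # (C, HW) target features
    # Cosine similarity between every target pixel and every source pixel.
    fs = fs / (np.linalg.norm(fs, axis=0, keepdims=True) + 1e-8)
    ft = ft / (np.linalg.norm(ft, axis=0, keepdims=True) + 1e-8)
    sim = ft.T @ fs  # (HW_tgt, HW_src)
    # Softmax over source pixels -> each row of `aff` sums to 1.
    sim = np.exp(sim / temperature)
    aff = sim / sim.sum(axis=1, keepdims=True)
    # Each target pixel's label is a similarity-weighted vote of source labels.
    mask_tgt = aff @ mask_src.reshape(-1)
    return mask_tgt.reshape(H, W)

def propagate_volume(features, mask0):
    """Chain propagation from the annotated slice 0 through all slices.
    Because each step feeds the previous (possibly imperfect) mask forward,
    errors accumulate with depth -- the failure mode Sli2Vol+ targets."""
    masks = [mask0.astype(float)]
    for z in range(1, len(features)):
        masks.append(propagate_mask(features[z - 1], features[z], masks[-1]))
    return np.stack(masks)
```

Since each affinity row sums to one, propagation is a convex combination of source labels: a uniform mask stays uniform, while any per-step blur compounds over the chain, which is one way to see why inter-slice error accumulation is a central concern.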
