Translation Consistent Semi-Supervised Segmentation for 3D Medical Images

Yuyuan Liu, Yu Tian, Chong Wang, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro

IEEE Transactions on Medical Imaging, vol. 44, no. 2, pp. 952-968. Published 2024-09-26. DOI: 10.1109/TMI.2024.3468896

Abstract

3D medical image segmentation methods have been successful, but their dependence on large amounts of voxel-level annotated data is a disadvantage that needs to be addressed, given the high cost of obtaining such annotations. Semi-supervised learning (SSL) addresses this issue by training models with a large unlabelled and a small labelled dataset. The most successful SSL approaches are based on consistency learning, which minimises the distance between model responses obtained from perturbed views of the unlabelled data. These perturbations usually keep the spatial input context between views fairly consistent, which may cause the model to learn segmentation patterns from the spatial input contexts instead of the foreground objects. In this paper, we introduce Translation Consistent Co-training (TraCoCo), a consistency learning SSL method that perturbs the input data views by varying their spatial input context, allowing the model to learn segmentation patterns from foreground objects. Furthermore, we propose a new Confident Regional Cross entropy (CRC) loss, which improves training convergence and maintains robustness to co-training pseudo-labelling mistakes. Our method yields state-of-the-art (SOTA) results on several 3D benchmarks, such as Left Atrium (LA), Pancreas-CT (Pancreas), and Brain Tumor Segmentation (BraTS19). It also attains the best results on a 2D-slice benchmark, the Automated Cardiac Diagnosis Challenge (ACDC), further demonstrating its effectiveness. Our code, training logs and checkpoints are available at https://github.com/yyliu01/TraCoCo.
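The abstract does not give implementation details, but the two ideas it names can be illustrated roughly: two crops of the same volume offset by a translation share an overlap region seen under different spatial input contexts, and a CRC-style loss applies cross entropy only where pseudo-label confidence is high. The NumPy sketch below is a hypothetical toy (crop sizes, the shift, the threshold `tau`, and the 2-class random "model outputs" are all illustrative assumptions, not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3D volume and two translated crops (the translation-consistent views).
vol = rng.standard_normal((8, 8, 8))
size, shift = 4, 2                                 # crop edge length, translation along x
crop_a = vol[0:size, 0:size, 0:size]               # view 1
crop_b = vol[shift:shift + size, 0:size, 0:size]   # view 2, translated by `shift`

# The views share an overlap; consistency would be enforced only there.
ov_a = crop_a[shift:, :, :]                        # overlap as seen from view 1
ov_b = crop_b[:size - shift, :, :]                 # overlap as seen from view 2
assert np.allclose(ov_a, ov_b)                     # same voxels, different input context

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def confident_ce(probs, pseudo_probs, tau=0.7):
    """CRC-style sketch: cross entropy against argmax pseudo-labels,
    restricted to voxels whose pseudo-label confidence exceeds tau."""
    conf = pseudo_probs.max(axis=-1)
    labels = pseudo_probs.argmax(axis=-1)
    mask = conf > tau
    if not mask.any():
        return 0.0
    picked = np.take_along_axis(probs, labels[..., None], axis=-1)[..., 0]
    return float(-(np.log(picked + 1e-8) * mask).sum() / mask.sum())

# Stand-in "model outputs": random 2-class probabilities per overlap voxel.
p_a = softmax(rng.standard_normal(ov_a.shape + (2,)))
p_b = softmax(rng.standard_normal(ov_b.shape + (2,)))

# Model A supervised on model B's confident voxels in the overlap.
loss = confident_ce(p_a, p_b)
```

In the real method the crops would feed two networks and the loss would be backpropagated; here the point is only the geometry of the overlap and the confidence masking.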