FPL-UDA: Filtered Pseudo Label-Based Unsupervised Cross-Modality Adaptation for Vestibular Schwannoma Segmentation

Jianghao Wu, Ran Gu, Guiming Dong, Guotai Wang, Shaoting Zhang
{"title":"FPL-UDA: Filtered Pseudo Label-Based Unsupervised Cross-Modality Adaptation for Vestibular Schwannoma Segmentation","authors":"Jianghao Wu, Ran Gu, Guiming Dong, Guotai Wang, Shaoting Zhang","doi":"10.1109/ISBI52829.2022.9761706","DOIUrl":null,"url":null,"abstract":"Automatic segmentation of Vestibular Schwannoma (VS) from Magnetic Resonance Imaging (MRI) will help patient management and improve clinical workflow. This paper aims to adapt a model trained with annotated ceT1 images to segment VS from hrT2 images, without annotations of the latter. The proposed method is named as Filtered Pseudo Label-based Unsupervised Domain Adaptation (FPL-UDA) and consists of three components: 1) an image translator converting hrT2 images to pseudo ceT1 images, where a two-stage translation strategy is proposed to deal with images with VS in various sizes, 2) a pseudo label generator trained with ceT1 images to provide pseudo labels for the pseudo ceT1 images, where a GAN-based data augmentation method is proposed to deal with the domain gap between them, and 3) a final segmentor trained with hrT2 images and the corresponding pseudo labels, where an uncertainty-based filtering is used to select high-quality pseudo labels to improve the segmentor’s robustness. Experimental results with a public VS dataset showed that our method achieved an average Dice of 81.52% for VS segmentation from hrT2 images, which outperformed existing unsupervised cross-modality adaptation methods.","PeriodicalId":6827,"journal":{"name":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","volume":"58 5","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBI52829.2022.9761706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Automatic segmentation of Vestibular Schwannoma (VS) from Magnetic Resonance Imaging (MRI) will help patient management and improve the clinical workflow. This paper aims to adapt a model trained with annotated ceT1 images to segment VS from hrT2 images without annotations of the latter. The proposed method, named Filtered Pseudo Label-based Unsupervised Domain Adaptation (FPL-UDA), consists of three components: 1) an image translator that converts hrT2 images to pseudo ceT1 images, where a two-stage translation strategy is proposed to deal with VS of various sizes; 2) a pseudo label generator trained with ceT1 images that provides pseudo labels for the pseudo ceT1 images, where a GAN-based data augmentation method is proposed to deal with the domain gap between them; and 3) a final segmentor trained with hrT2 images and the corresponding pseudo labels, where uncertainty-based filtering is used to select high-quality pseudo labels and improve the segmentor's robustness. Experimental results on a public VS dataset showed that our method achieved an average Dice of 81.52% for VS segmentation from hrT2 images, outperforming existing unsupervised cross-modality adaptation methods.
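
The uncertainty-based filtering in component 3 can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes that K stochastic forward passes of the pseudo label generator (e.g., via Monte Carlo dropout or test-time augmentation) are already available as per-case foreground probability maps, scores each case by the mean voxel-wise entropy inside its predicted VS region, and keeps the most confident fraction of cases for training the final segmentor. The array shapes, the `keep_ratio` parameter, and the region-averaged entropy score are illustrative assumptions.

```python
import numpy as np

def entropy_map(prob_maps):
    """Voxel-wise binary entropy of the mean foreground probability over K
    stochastic forward passes. prob_maps: array of shape (K, D, H, W)."""
    p = prob_maps.mean(axis=0)
    eps = 1e-6
    return -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))

def filter_pseudo_labels(cases, keep_ratio=0.8):
    """Rank cases by mean entropy inside the predicted tumour region and keep
    the most confident fraction (keep_ratio is an assumed hyper-parameter).
    Each case is (case_id, prob_maps) with prob_maps of shape (K, D, H, W)."""
    scored = []
    for case_id, prob_maps in cases:
        pseudo_label = (prob_maps.mean(axis=0) > 0.5).astype(np.uint8)
        ent = entropy_map(prob_maps)
        # Average uncertainty over the predicted VS region; fall back to the
        # whole volume if the prediction is empty.
        mask = pseudo_label > 0
        score = ent[mask].mean() if mask.any() else ent.mean()
        scored.append((score, case_id, pseudo_label))
    scored.sort(key=lambda x: x[0])  # low entropy = high confidence
    n_keep = max(1, int(len(scored) * keep_ratio))
    return [(case_id, label) for _, case_id, label in scored[:n_keep]]
```

The retained (case_id, pseudo label) pairs would then be paired with the corresponding hrT2 images to train the final segmentor; cases with high uncertainty are simply excluded rather than down-weighted in this sketch.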