A segment anything model-guided and match-based semi-supervised segmentation framework for medical imaging.

Medical Physics · Pub Date: 2025-03-29 · DOI: 10.1002/mp.17785
Guoping Xu, Xiaoxue Qian, Hua-Chieh Shao, Jax Luo, Weiguo Lu, You Zhang
{"title":"A segment anything model-guided and match-based semi-supervised segmentation framework for medical imaging.","authors":"Guoping Xu, Xiaoxue Qian, Hua-Chieh Shao, Jax Luo, Weiguo Lu, You Zhang","doi":"10.1002/mp.17785","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Semi-supervised segmentation leverages sparse annotation information to learn rich representations from combined labeled and label-less data for segmentation tasks. The Match-based framework, by using the consistency constraint of segmentation results from different models/augmented label-less inputs, is found effective in semi-supervised learning. This approach, however, is challenged by the low quality of pseudo-labels generated as intermediate products for training the network, due to the lack of the ''ground-truth'' reference.</p><p><strong>Purpose: </strong>This study aims to leverage the foundation model, segment anything model (SAM), to assist unsupervised learning of Match-based frameworks. Trained with an extremely large dataset, SAM-based methods generalize better than traditional models to various imaging domains, allow it to serve as an assistant to Match-based frameworks to improve the quality of intermediate pseudo-labels for semi-supervised learning.</p><p><strong>Methods: </strong>We propose SAM-Match, a SAM-guided and Match-based framework for semi-supervised medical image segmentation. Our approach involves two main steps: First, we use pretrained Match-based models to extract high-confidence predictions for prompt generation. Second, these prompts and unlabeled images are input into a fine-tuned SAM-based method to produce high-quality masks as pseudo-labels. And the refined pseudo-labels are further fed back to train the Match-based framework. SAM-Match can be trained in an end-to-end manner, facilitating interactions between the SAM- and Match-based models.</p><p><strong>Results: </strong>SAM-Match demonstrates robust performance across multiple medical imaging datasets, including the ACDC cardiac MRI dataset, the BUSI breast ultrasound dataset, and an in-house liver MRI dataset (MRLiver). We partitioned the datasets into training, validation, and test sets (70%, 10%, and 20% for ACDC; 60%, 9%, and 31% for BUSI; and 62%, 12%, and 25% for MRLiver). On ACDC, with only 3 labeled cases, we achieved a Dice score of 89.36% ± 0.06% on 20 test cases. For BUSI, using just 30 labeled samples for training, we attained a Dice score of 59.35% ± 0.12% on 170 test samples. On MRLiver, training with only 3 labeled cases resulted in a Dice score of 80.04% ± 0.11% on 12 test scans. Wilcoxon signed-rank tests with Bonferroni corrections between the SAM-Match framework and the other comparison methods further demonstrated the statistical significance of SAM-Match's improvement in segmentation accuracy.</p><p><strong>Conclusions: </strong>Our SAM-Match framework shows promising results in semi-supervised semantic segmentation, effectively tackling the challenges of automatic prompt generation for SAM and high-quality pseudo-label generation for Match-based models. It can help accelerate the adoption of semi-supervised learning in segmentation tasks, particularly in data-scarce scenarios. 
Our data and code will be made available at https://github.com/apple1986/SAMatch.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/mp.17785","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Semi-supervised segmentation leverages sparse annotation information to learn rich representations from combined labeled and unlabeled data for segmentation tasks. The Match-based framework, which enforces consistency among segmentation results from different models or differently augmented unlabeled inputs, has proven effective in semi-supervised learning. This approach, however, is challenged by the low quality of the pseudo-labels generated as intermediate products for training the network, owing to the lack of a ground-truth reference.
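As an illustration only (not taken from the paper), the snippet below sketches the kind of FixMatch-style consistency constraint that Match-based frameworks typically rely on: pseudo-labels predicted on a weakly augmented view supervise the prediction on a strongly augmented view, restricted to high-confidence pixels. The function name, the confidence threshold, and the single-model setup are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def match_consistency_loss(model, weak_img, strong_img, conf_threshold=0.95):
    """Illustrative FixMatch-style consistency loss for segmentation.

    Pseudo-labels come from the weakly augmented view; only pixels whose
    predicted class probability exceeds `conf_threshold` contribute to the
    loss on the strongly augmented view. Names and threshold are assumptions.
    """
    with torch.no_grad():
        weak_logits = model(weak_img)                     # (B, C, H, W)
        weak_probs = torch.softmax(weak_logits, dim=1)
        max_probs, pseudo_labels = weak_probs.max(dim=1)  # (B, H, W)
        mask = (max_probs >= conf_threshold).float()      # keep confident pixels only

    strong_logits = model(strong_img)
    pixel_loss = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (pixel_loss * mask).sum() / mask.sum().clamp(min=1.0)
```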

Purpose: This study aims to leverage a foundation model, the segment anything model (SAM), to assist the unsupervised learning component of Match-based frameworks. Trained on an extremely large dataset, SAM-based methods generalize better than traditional models across imaging domains, allowing them to serve as assistants to Match-based frameworks and improve the quality of intermediate pseudo-labels for semi-supervised learning.

Methods: We propose SAM-Match, a SAM-guided and Match-based framework for semi-supervised medical image segmentation. Our approach involves two main steps. First, we use pretrained Match-based models to extract high-confidence predictions for prompt generation. Second, these prompts and the corresponding unlabeled images are input into a fine-tuned SAM-based method to produce high-quality masks as pseudo-labels. The refined pseudo-labels are then fed back to train the Match-based framework. SAM-Match can be trained end to end, facilitating interactions between the SAM- and Match-based models.
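The sketch below illustrates one plausible form of the prompt-then-refine step, under stated assumptions: confident foreground pixels from a Match-based model are converted into point prompts for SAM via the official `segment_anything` package, and SAM's output mask becomes the refined pseudo-label. The paper uses a fine-tuned SAM-based method whose prompt strategy and hyperparameters are not detailed in the abstract, so `refine_with_sam`, the point count, and the threshold are hypothetical.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def refine_with_sam(predictor: SamPredictor, image_rgb: np.ndarray,
                    match_probs: np.ndarray, conf_threshold: float = 0.95,
                    n_points: int = 5) -> np.ndarray:
    """Turn a Match model's confident foreground pixels into point prompts
    and let SAM produce a refined pseudo-label mask (illustrative only)."""
    ys, xs = np.where(match_probs >= conf_threshold)   # confident foreground pixels
    if len(ys) == 0:
        return (match_probs >= 0.5).astype(np.uint8)   # fall back to the raw prediction
    idx = np.random.choice(len(ys), size=min(n_points, len(ys)), replace=False)
    point_coords = np.stack([xs[idx], ys[idx]], axis=1)  # SAM expects (x, y) order
    point_labels = np.ones(len(idx), dtype=np.int64)     # 1 = foreground prompt

    predictor.set_image(image_rgb)                        # HxWx3 uint8 RGB image
    masks, scores, _ = predictor.predict(point_coords=point_coords,
                                         point_labels=point_labels,
                                         multimask_output=False)
    return masks[0].astype(np.uint8)                      # refined pseudo-label

# Hypothetical usage: the refined mask replaces the raw pseudo-label when
# training the Match-based model.
# sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
# predictor = SamPredictor(sam)
```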

Results: SAM-Match demonstrates robust performance across multiple medical imaging datasets, including the ACDC cardiac MRI dataset, the BUSI breast ultrasound dataset, and an in-house liver MRI dataset (MRLiver). We partitioned the datasets into training, validation, and test sets (70%, 10%, and 20% for ACDC; 60%, 9%, and 31% for BUSI; and 62%, 12%, and 25% for MRLiver). On ACDC, with only 3 labeled cases, we achieved a Dice score of 89.36% ± 0.06% on 20 test cases. For BUSI, using just 30 labeled samples for training, we attained a Dice score of 59.35% ± 0.12% on 170 test samples. On MRLiver, training with only 3 labeled cases resulted in a Dice score of 80.04% ± 0.11% on 12 test scans. Wilcoxon signed-rank tests with Bonferroni corrections between the SAM-Match framework and the other comparison methods further demonstrated the statistical significance of SAM-Match's improvement in segmentation accuracy.
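As a hedged sketch of how the reported significance testing could be reproduced, assuming per-case Dice scores are available for SAM-Match and each comparison method, the snippet below runs paired Wilcoxon signed-rank tests with a Bonferroni-corrected threshold; `compare_methods` and its arguments are illustrative names, not from the paper.

```python
from scipy.stats import wilcoxon

def compare_methods(dice_sam_match, dice_baselines, alpha=0.05):
    """Paired Wilcoxon signed-rank tests against each baseline method,
    with a Bonferroni-corrected significance threshold (illustrative)."""
    corrected_alpha = alpha / len(dice_baselines)       # Bonferroni correction
    results = {}
    for name, dice_other in dice_baselines.items():
        stat, p = wilcoxon(dice_sam_match, dice_other)  # paired, per-case Dice scores
        results[name] = (p, p < corrected_alpha)
    return results
```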

Conclusions: Our SAM-Match framework shows promising results in semi-supervised semantic segmentation, effectively tackling the challenges of automatic prompt generation for SAM and high-quality pseudo-label generation for Match-based models. It can help accelerate the adoption of semi-supervised learning in segmentation tasks, particularly in data-scarce scenarios. Our data and code will be made available at https://github.com/apple1986/SAMatch.
