Vox-MMSD: Voxel-wise Multi-scale and Multi-modal Self-Distillation for Self-supervised Brain Tumor Segmentation.

IF 6.8 | CAS Zone 2 (Medicine) | JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Yubo Zhou, Jianghao Wu, Jia Fu, Qiang Yue, Wenjun Liao, Shichuan Zhang, Shaoting Zhang, Guotai Wang
{"title":"Vox-MMSD: Voxel-wise Multi-scale and Multi-modal Self-Distillation for Self-supervised Brain Tumor Segmentation.","authors":"Yubo Zhou, Jianghao Wu, Jia Fu, Qiang Yue, Wenjun Liao, Shichuan Zhang, Shaoting Zhang, Guotai Wang","doi":"10.1109/JBHI.2025.3592116","DOIUrl":null,"url":null,"abstract":"<p><p>Many deep learning methods have been proposed for brain tumor segmentation from multi-modal Magnetic Resonance Imaging (MRI) scans that are important for accurate diagnosis and treatment planning. However, supervised learning needs a large amount of labeled data to perform well, where the time-consuming and expensive annotation process or small size of training set will limit the model's performance. To deal with these problems, self-supervised pre-training is an appealing solution due to its feature learning ability from a set of unlabeled images that is transferable to downstream datasets with a small size. However, existing methods often overlook the utilization of multi-modal information and multi-scale features. Therefore, we propose a novel Self-Supervised Learning (SSL) framework that fully leverages multi-modal MRI scans to extract modality-invariant features for brain tumor segmentation. First, we employ a Siamese Block-wise Modality Masking (SiaBloMM) strategy that creates more diverse model inputs for image restoration to simultaneously learn contextual and modality-invariant features. Meanwhile, we proposed Overlapping Random Modality Sampling (ORMS) to sample voxel pairs with multi-scale features for self-distillation, enhancing voxel-wise representation which is important for segmentation tasks. Experiments on the BraTS 2024 adult glioma segmentation dataset showed that with a small amount of labeled data for fine-tuning, our method improved the average Dice by 3.80 percentage points. In addition, when transferred to three other small downstream datasets with brain tumors from different patient groups, our method also improved the dice by 3.47 percentage points on average, and outperformed several existing SSL methods. The code is availiable at https://github.com/HiLab-git/Vox-MMSD.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8000,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Biomedical and Health Informatics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/JBHI.2025.3592116","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Many deep learning methods have been proposed for brain tumor segmentation from multi-modal Magnetic Resonance Imaging (MRI) scans, which is important for accurate diagnosis and treatment planning. However, supervised learning needs a large amount of labeled data to perform well, and the time-consuming, expensive annotation process or a small training set limits the model's performance. To deal with these problems, self-supervised pre-training is an appealing solution, as it learns features from unlabeled images that transfer well to small downstream datasets. However, existing methods often overlook multi-modal information and multi-scale features. Therefore, we propose a novel Self-Supervised Learning (SSL) framework that fully leverages multi-modal MRI scans to extract modality-invariant features for brain tumor segmentation. First, we employ a Siamese Block-wise Modality Masking (SiaBloMM) strategy that creates more diverse model inputs for image restoration, so that contextual and modality-invariant features are learned simultaneously. Meanwhile, we propose Overlapping Random Modality Sampling (ORMS) to sample voxel pairs with multi-scale features for self-distillation, enhancing the voxel-wise representations that are important for segmentation. Experiments on the BraTS 2024 adult glioma segmentation dataset showed that with a small amount of labeled data for fine-tuning, our method improved the average Dice by 3.80 percentage points. When transferred to three other small downstream datasets with brain tumors from different patient groups, it also improved the Dice by 3.47 percentage points on average and outperformed several existing SSL methods. The code is available at https://github.com/HiLab-git/Vox-MMSD.
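As an illustration of the masking idea described in the abstract, below is a minimal, hypothetical Python/PyTorch sketch of block-wise modality masking for a Siamese self-supervised setup. The function name block_wise_modality_mask, the block size, the masking probability, and zero-filling of masked blocks are assumptions made only for illustration; the abstract does not give these implementation details, and the actual Vox-MMSD code at https://github.com/HiLab-git/Vox-MMSD should be treated as authoritative.

import torch

def block_wise_modality_mask(volume: torch.Tensor,
                             block_size: int = 16,
                             mask_prob: float = 0.5) -> torch.Tensor:
    """Randomly drop individual modalities within each spatial block.

    volume: tensor of shape (num_modalities, D, H, W), e.g. four MRI sequences.
    Each (block, modality) pair is zeroed independently with probability
    mask_prob; at least one modality is kept per block so it stays restorable.
    """
    num_mod, d, h, w = volume.shape
    masked = volume.clone()
    for z in range(0, d, block_size):
        for y in range(0, h, block_size):
            for x in range(0, w, block_size):
                drop = torch.rand(num_mod) < mask_prob
                if drop.all():  # keep one modality visible in this block
                    drop[torch.randint(num_mod, (1,))] = False
                for m in torch.nonzero(drop).flatten().tolist():
                    masked[m, z:z + block_size, y:y + block_size, x:x + block_size] = 0.0
    return masked

# Two independently masked views of one scan can feed the two branches of a
# Siamese restoration / self-distillation objective.
scan = torch.randn(4, 64, 64, 64)   # toy stand-in for T1, T1ce, T2, FLAIR
view_a = block_wise_modality_mask(scan)
view_b = block_wise_modality_mask(scan)
print(view_a.shape, view_b.shape)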

Source journal
IEEE Journal of Biomedical and Health Informatics
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
CiteScore: 13.60
Self-citation rate: 6.50%
Articles published: 1151
Journal description: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.