Boyue Zhao, Yi Zhang, Meng Zhao, Guoxia Xu, Congcong Wang
{"title":"随机缺失模态下多模态脑肿瘤分割的自适应辅助扩散","authors":"Boyue Zhao , Yi Zhang , Meng Zhao , Guoxia Xu , Congcong Wang","doi":"10.1016/j.bspc.2025.108015","DOIUrl":null,"url":null,"abstract":"<div><div>Brain tumor segmentation methods based on multi-modal MRI perform significantly well when data is complete. In clinical settings, the absence of modalities due to artifacts and equipment problems often renders these methods ineffective. Current research attempts to train a universal model to adapt to 15 different random combinations of missing modalities. However, due to the random and complex nature of missing modality combinations across different cases, a single model faces challenges in dynamically adjusting its processing strategy to accommodate specific missing modality scenarios, ultimately leading to diminished segmentation accuracy. In this work, we introduce an end-to-end Incomplete Multi-modal Diffusion Brain Tumor Segmentation (IMD-TumorSeg) framework, which is designed to handle various scenarios with missing modalities. Specifically, the model incorporates independent generative modules for each modality and introduces an adaptive conditional integration mechanism to dynamically adjust the weight fusion between missing and available modalities. In addition, an attention-driven diffusion strategy is proposed to facilitate collaborative learning between the diffusion process and the segmentation network. Furthermore, by integrating an image estimator, the framework evaluates the similarity between generated and real images in real-time, optimizing the generation process and ensuring both visual and semantic consistency of the generated images. Extensive experimental results on the BraTS 2018 and BraTS 2020 datasets demonstrate that IMD-TumorSeg exhibits superior performance and effectiveness in handling missing modalities compared to state-of-the-art methods.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"109 ","pages":"Article 108015"},"PeriodicalIF":4.9000,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive auxiliary diffusion for multi-modal brain tumor segmentation with random missing modalities\",\"authors\":\"Boyue Zhao , Yi Zhang , Meng Zhao , Guoxia Xu , Congcong Wang\",\"doi\":\"10.1016/j.bspc.2025.108015\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Brain tumor segmentation methods based on multi-modal MRI perform significantly well when data is complete. In clinical settings, the absence of modalities due to artifacts and equipment problems often renders these methods ineffective. Current research attempts to train a universal model to adapt to 15 different random combinations of missing modalities. However, due to the random and complex nature of missing modality combinations across different cases, a single model faces challenges in dynamically adjusting its processing strategy to accommodate specific missing modality scenarios, ultimately leading to diminished segmentation accuracy. In this work, we introduce an end-to-end Incomplete Multi-modal Diffusion Brain Tumor Segmentation (IMD-TumorSeg) framework, which is designed to handle various scenarios with missing modalities. 
Specifically, the model incorporates independent generative modules for each modality and introduces an adaptive conditional integration mechanism to dynamically adjust the weight fusion between missing and available modalities. In addition, an attention-driven diffusion strategy is proposed to facilitate collaborative learning between the diffusion process and the segmentation network. Furthermore, by integrating an image estimator, the framework evaluates the similarity between generated and real images in real-time, optimizing the generation process and ensuring both visual and semantic consistency of the generated images. Extensive experimental results on the BraTS 2018 and BraTS 2020 datasets demonstrate that IMD-TumorSeg exhibits superior performance and effectiveness in handling missing modalities compared to state-of-the-art methods.</div></div>\",\"PeriodicalId\":55362,\"journal\":{\"name\":\"Biomedical Signal Processing and Control\",\"volume\":\"109 \",\"pages\":\"Article 108015\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2025-05-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Signal Processing and Control\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1746809425005269\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809425005269","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Adaptive auxiliary diffusion for multi-modal brain tumor segmentation with random missing modalities
Brain tumor segmentation methods based on multi-modal MRI perform well when all modalities are available. In clinical settings, however, the absence of modalities due to imaging artifacts and equipment problems often renders these methods ineffective. Current research attempts to train a universal model that adapts to the 15 possible missing-modality scenarios arising from the four standard MRI sequences. However, because missing-modality combinations vary randomly and unpredictably across cases, a single model struggles to dynamically adjust its processing strategy to each specific scenario, which ultimately diminishes segmentation accuracy. In this work, we introduce an end-to-end Incomplete Multi-modal Diffusion Brain Tumor Segmentation (IMD-TumorSeg) framework designed to handle diverse missing-modality scenarios. Specifically, the model incorporates an independent generative module for each modality and introduces an adaptive conditional integration mechanism that dynamically adjusts the fusion weights between missing and available modalities. In addition, an attention-driven diffusion strategy is proposed to facilitate collaborative learning between the diffusion process and the segmentation network. Furthermore, by integrating an image estimator, the framework evaluates the similarity between generated and real images in real time, optimizing the generation process and ensuring both visual and semantic consistency of the generated images. Extensive experiments on the BraTS 2018 and BraTS 2020 datasets demonstrate that IMD-TumorSeg outperforms state-of-the-art methods in handling missing modalities.
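To make the adaptive conditional integration idea above more concrete, the following PyTorch sketch shows one plausible way to fuse available modalities with generated substitutes for missing ones using input-dependent weights. This is a minimal illustration under assumed conventions; the class name AdaptiveConditionalFusion, the tensor shapes, and the gating design are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation): available MRI modalities
# are kept, missing ones are replaced by generator outputs, and a small gating
# network predicts per-modality fusion weights from pooled features.
import torch
import torch.nn as nn


class AdaptiveConditionalFusion(nn.Module):
    """Hypothetical fusion of real and generated MRI modalities with learned weights."""

    def __init__(self, num_modalities: int = 4, channels: int = 16):
        super().__init__()
        # One lightweight encoder per modality (stand-in for per-modality generative modules).
        self.encoders = nn.ModuleList(
            [nn.Conv3d(1, channels, kernel_size=3, padding=1) for _ in range(num_modalities)]
        )
        # Gating network: pooled per-modality features -> one fusion weight per modality.
        self.gate = nn.Sequential(
            nn.Linear(num_modalities * channels, num_modalities),
            nn.Softmax(dim=-1),
        )

    def forward(self, images, generated, mask):
        # images, generated: lists of (B, 1, D, H, W) float tensors, one entry per modality.
        # mask: (B, num_modalities) float tensor, 1.0 = modality available, 0.0 = missing.
        feats = []
        for m, (enc, real, fake) in enumerate(zip(self.encoders, images, generated)):
            avail = mask[:, m].view(-1, 1, 1, 1, 1)
            # Keep the real scan where available; substitute the generated image where missing.
            feats.append(enc(avail * real + (1.0 - avail) * fake))
        pooled = torch.cat([f.mean(dim=(2, 3, 4)) for f in feats], dim=1)  # (B, M * C)
        weights = self.gate(pooled)                                        # (B, M), sums to 1
        fused = sum(w.view(-1, 1, 1, 1, 1) * f
                    for w, f in zip(weights.unbind(dim=1), feats))
        return fused  # shared feature map for the downstream segmentation network
```

Because the gating weights depend on pooled features from all modality branches, the network can down-weight a generated (and possibly less reliable) modality relative to the real scans; as described in the abstract, such a fused representation would then feed the diffusion-based segmentation stage.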
Journal introduction:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.