Jiantao Qu, Dongjin Huang, Yongsheng Shi, Jinhua Liu, Wen Tang
Title: Entropy-aware dynamic path selection network for multi-modality medical image fusion
Journal: Information Fusion, Volume 123, Article 103312 (Q1, Computer Science, Artificial Intelligence)
DOI: 10.1016/j.inffus.2025.103312
URL: https://www.sciencedirect.com/science/article/pii/S1566253525003859
Publication date: 2025-05-22
Citations: 0
Abstract
Deep learning has achieved significant success in multi-modality medical image fusion (MMIF). Nevertheless, the distribution of spatial information varies across regions within a medical image. Current methods treat the medical image as a whole, leading to uneven fusion and susceptibility to artifacts in edge regions. To address this problem, we delve into regional information fusion and introduce an entropy-aware dynamic path selection network (EDPSN). Specifically, we introduce a novel edge enhancement module (EEM) to mitigate artifacts in edge regions through a central concentration gradient (CCG). Additionally, an entropy-aware division (ED) module is designed to delineate the spatial information levels of distinct regions in the image through entropy convolution. Finally, a dynamic path selection (DPS) module is introduced to enable adaptive fusion of regions with diverse spatial information. Experimental comparisons with state-of-the-art image fusion methods demonstrate the outstanding performance of the EDPSN on three datasets encompassing MRI-CT, MRI-PET, and MRI-SPECT. Moreover, the robustness of the proposed method is validated on the CHAOS dataset, and its clinical value is validated by sixteen doctors and medical students.
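The abstract does not specify how the ED module's entropy convolution is implemented, but the underlying idea — scoring image regions by local Shannon entropy and partitioning them into information levels — can be illustrated with a minimal sketch. All function names, the window size, and the threshold below are hypothetical choices, not taken from the paper:

```python
import numpy as np

def local_entropy(img, win=8):
    """Per-window Shannon entropy map for a 2-D uint8 grayscale image.

    Splits the image into non-overlapping win x win blocks and returns
    an (H//win, W//win) array of entropies in bits.
    """
    h, w = img.shape
    h, w = h - h % win, w - w % win  # crop to a whole number of blocks
    blocks = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    ent = np.empty(blocks.shape[:2])
    for i in range(blocks.shape[0]):
        for j in range(blocks.shape[1]):
            counts = np.bincount(blocks[i, j].ravel(), minlength=256)
            p = counts[counts > 0] / counts.sum()
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

def split_regions(ent_map, thresh=1.0):
    """Label each window: 1 = information-rich, 0 = flat/low-information."""
    return (ent_map >= thresh).astype(np.uint8)

# Toy example: left half flat (zero entropy), right half random noise.
rng = np.random.default_rng(0)
img = np.zeros((16, 16), dtype=np.uint8)
img[:, 8:] = rng.integers(0, 256, size=(16, 8), dtype=np.uint8)
ent = local_entropy(img, win=8)
mask = split_regions(ent)
```

Such a mask could then steer each region down a different fusion path; in the actual EDPSN the division is learned (entropy convolution) rather than a fixed threshold.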
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.