Authors: Yiyuan Xie, Lei Yu, Cheng Ding
DOI: 10.1002/ima.23144
Journal: International Journal of Imaging Systems and Technology, Vol. 34, No. 4
Published: 2024-07-08 (Journal Article)
URL: https://onlinelibrary.wiley.com/doi/10.1002/ima.23144
CFIFusion: Dual-Branch Complementary Feature Injection Network for Medical Image Fusion
The goal of fusing medical images is to integrate the diverse information that multimodal medical images hold. However, limitations of imaging sensors and incomplete retention of modal information make it difficult to produce images that encompass both functional and anatomical information. To overcome these obstacles, several medical image fusion techniques based on CNN or transformer architectures have been presented. Nevertheless, CNN-based techniques struggle to establish long-range dependencies between the fused and source images, and transformer architectures often overlook shallow complementary features. To improve both the feature extraction capacity and the stability of the model, we introduce CFIFusion, a dual-branch complementary feature injection framework for multimodal medical image fusion that combines an unsupervised CNN model with transformer techniques. Specifically, the entire source image and the segmented source image are fed into an adaptive backbone network to learn global and local features, respectively. To further retain the source images' complementary information, we design a multi-scale complementary feature extraction framework as an auxiliary module, which computes feature differences at each level to capture shallow complementary information. We also design a shallow information preservation module tailored to the characteristics of sliced images. Experimental results on the Harvard whole brain atlas dataset demonstrate that CFIFusion outperforms recent state-of-the-art algorithms in both subjective and objective evaluations.
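The abstract's multi-scale complementary feature idea can be sketched roughly as follows. This is a minimal illustration, not the paper's actual operators: the per-level absolute difference as a proxy for complementary information, and the injection weight `alpha`, are assumptions made for the sketch.

```python
import numpy as np

def complementary_features(feats_a, feats_b):
    """At each scale, take the absolute difference between the two
    modalities' feature maps as a stand-in for the complementary
    information the auxiliary module extracts (illustrative choice)."""
    return [np.abs(fa - fb) for fa, fb in zip(feats_a, feats_b)]

def inject(backbone_feats, comp_feats, alpha=0.5):
    """Inject complementary features back into the backbone stream by
    weighted addition; alpha is an assumed hyperparameter."""
    return [f + alpha * c for f, c in zip(backbone_feats, comp_feats)]

# Toy multi-scale features for two modalities (e.g., MRI and PET slices).
rng = np.random.default_rng(0)
scales = [(16, 16), (8, 8), (4, 4)]
mri = [rng.standard_normal(s) for s in scales]
pet = [rng.standard_normal(s) for s in scales]

comp = complementary_features(mri, pet)
fused = inject(mri, comp)
print([f.shape for f in fused])  # one fused map per scale
```

The point of the sketch is only the data flow: complementary information is computed level by level and then injected into the main feature stream, rather than being recovered from the final fused output.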
Journal introduction:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies, and negative results are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision, based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.