{"title":"ECINFusion: A Novel Explicit Channel-Wise Interaction Network for Unified Multi-Modal Medical Image Fusion","authors":"Xinjian Wei;Yu Qiu;Xiaoxuan Xu;Jing Xu;Jie Mei;Jun Zhang","doi":"10.1109/TCSVT.2024.3516705","DOIUrl":null,"url":null,"abstract":"Multi-modal medical image fusion enhances the representation, aggregation, and comprehension of functional and structural information, improving the accuracy and efficiency of subsequent analysis. However, the lack of explicit cross-channel modeling and interaction among modalities leads to lost details and artifacts. To this end, we propose a novel <u>E</u>xplicit <u>C</u>hannel-wise <u>I</u>nteraction <u>N</u>etwork for unified multi-modal medical image <u>Fusion</u>, namely ECINFusion. ECINFusion comprises two components: multi-scale adaptive feature modeling (MAFM) and an explicit channel-wise interaction mechanism (ECIM). MAFM leverages adaptive parallel convolution and a transformer in a multi-scale manner to achieve global context-aware feature representation. ECIM employs a purpose-designed multi-head channel-attention mechanism for explicit modeling along the channel dimension to accomplish cross-modal interaction. In addition, we introduce a novel adaptive L-Norm loss that preserves fine-grained details. Experiments demonstrate that ECINFusion outperforms state-of-the-art approaches on various medical fusion sub-tasks across different metrics. Furthermore, extended experiments reveal the robust generalization of the proposed method across different fusion tasks. In brief, the proposed explicit channel-wise interaction mechanism provides new insight into multi-modal interaction.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 5","pages":"4011-4025"},"PeriodicalIF":8.3000,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10795252/","RegionNum":1,"RegionCategory":"Engineering & Technology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Multi-modal medical image fusion enhances the representation, aggregation, and comprehension of functional and structural information, improving the accuracy and efficiency of subsequent analysis. However, the lack of explicit cross-channel modeling and interaction among modalities leads to lost details and artifacts. To this end, we propose a novel Explicit Channel-wise Interaction Network for unified multi-modal medical image Fusion, namely ECINFusion. ECINFusion comprises two components: multi-scale adaptive feature modeling (MAFM) and an explicit channel-wise interaction mechanism (ECIM). MAFM leverages adaptive parallel convolution and a transformer in a multi-scale manner to achieve global context-aware feature representation. ECIM employs a purpose-designed multi-head channel-attention mechanism for explicit modeling along the channel dimension to accomplish cross-modal interaction. In addition, we introduce a novel adaptive L-Norm loss that preserves fine-grained details. Experiments demonstrate that ECINFusion outperforms state-of-the-art approaches on various medical fusion sub-tasks across different metrics. Furthermore, extended experiments reveal the robust generalization of the proposed method across different fusion tasks. In brief, the proposed explicit channel-wise interaction mechanism provides new insight into multi-modal interaction.
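To make the central idea concrete, a minimal NumPy sketch of cross-modal multi-head attention computed over the channel dimension is given below. This is an illustrative reconstruction of the general technique the abstract names, not the authors' exact ECIM: the function name, head-splitting scheme, and the choice to draw queries from one modality and keys/values from the other are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_cross_attention(f_a, f_b, num_heads=2):
    """Illustrative cross-modal multi-head attention over the CHANNEL dimension.

    f_a, f_b: feature maps of shape (C, N), where N = H*W (flattened spatial
    positions). Unlike spatial attention, the "tokens" here are channels:
    each channel of modality A attends to every channel of modality B,
    which is one way to realize explicit channel-wise interaction.
    (Hypothetical sketch; the paper's actual ECIM may differ in detail.)
    """
    C, N = f_a.shape
    assert f_b.shape == (C, N) and N % num_heads == 0
    d = N // num_heads  # each head sees a slice of the spatial dimension
    out = np.empty_like(f_a)
    for h in range(num_heads):
        q = f_a[:, h * d:(h + 1) * d]  # (C, d) queries from modality A
        k = f_b[:, h * d:(h + 1) * d]  # (C, d) keys from modality B
        v = f_b[:, h * d:(h + 1) * d]  # (C, d) values from modality B
        # (C, C) affinity between every channel of A and every channel of B
        attn = softmax(q @ k.T / np.sqrt(d), axis=-1)
        out[:, h * d:(h + 1) * d] = attn @ v  # mix modality-B channels per head
    return out
```

Because the attention matrix is C x C rather than N x N, its cost scales with the channel count instead of the image resolution, which is what makes explicit channel-dimension modeling attractive for dense fusion tasks.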
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.