{"title":"用于多模态图像融合的多尺度扩散变压器","authors":"Caifeng Xia;Hongwei Gao;Wei Yang;Jiahui Yu","doi":"10.1109/TETCI.2025.3542146","DOIUrl":null,"url":null,"abstract":"Multimodal image fusion is a vital technique that integrates images from various sensors to create a comprehensive and coherent representation, with broad applications in surveillance, medical imaging, and autonomous driving. However, current fusion methods struggle with inadequate feature representation, limited global context understanding due to the small receptive fields of convolutional neural networks (CNNs), and the loss of high-frequency information, all of which lead to suboptimal fusion quality. To address these challenges, we propose the Multi-Scale Diffusion Transformer (MSDT), a novel fusion framework that seamlessly combines a latent diffusion model with a transformer-based architecture. MSDT uses a perceptual compression network to encode source images into a low-dimensional latent space, reducing computational complexity while preserving essential features. It also incorporates a multiscale feature fusion mechanism, enhancing both detail and structural understanding. Additionally, MSDT features a self-attention module to extract unique high-frequency features and a cross-attention module to identify common low-frequency features across modalities, improving contextual understanding. Extensive experiments on three datasets show that MSDT significantly outperforms state-of-the-art methods across twelve evaluation metrics, achieving an SSIM score of 0.98. 
Moreover, MSDT demonstrates superior robustness and generalizability, highlighting the potential of integrating diffusion models with transformer architectures for multimodal image fusion.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2269-2283"},"PeriodicalIF":5.3000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MSDT: Multiscale Diffusion Transformer for Multimodality Image Fusion\",\"authors\":\"Caifeng Xia;Hongwei Gao;Wei Yang;Jiahui Yu\",\"doi\":\"10.1109/TETCI.2025.3542146\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multimodal image fusion is a vital technique that integrates images from various sensors to create a comprehensive and coherent representation, with broad applications in surveillance, medical imaging, and autonomous driving. However, current fusion methods struggle with inadequate feature representation, limited global context understanding due to the small receptive fields of convolutional neural networks (CNNs), and the loss of high-frequency information, all of which lead to suboptimal fusion quality. To address these challenges, we propose the Multi-Scale Diffusion Transformer (MSDT), a novel fusion framework that seamlessly combines a latent diffusion model with a transformer-based architecture. MSDT uses a perceptual compression network to encode source images into a low-dimensional latent space, reducing computational complexity while preserving essential features. It also incorporates a multiscale feature fusion mechanism, enhancing both detail and structural understanding. Additionally, MSDT features a self-attention module to extract unique high-frequency features and a cross-attention module to identify common low-frequency features across modalities, improving contextual understanding. 
Extensive experiments on three datasets show that MSDT significantly outperforms state-of-the-art methods across twelve evaluation metrics, achieving an SSIM score of 0.98. Moreover, MSDT demonstrates superior robustness and generalizability, highlighting the potential of integrating diffusion models with transformer architectures for multimodal image fusion.\",\"PeriodicalId\":13135,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"volume\":\"9 3\",\"pages\":\"2269-2283\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-03-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10909311/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10909311/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
MSDT: Multiscale Diffusion Transformer for Multimodality Image Fusion
Multimodal image fusion is a vital technique that integrates images from various sensors to create a comprehensive and coherent representation, with broad applications in surveillance, medical imaging, and autonomous driving. However, current fusion methods struggle with inadequate feature representation, limited global context understanding due to the small receptive fields of convolutional neural networks (CNNs), and the loss of high-frequency information, all of which lead to suboptimal fusion quality. To address these challenges, we propose the Multi-Scale Diffusion Transformer (MSDT), a novel fusion framework that seamlessly combines a latent diffusion model with a transformer-based architecture. MSDT uses a perceptual compression network to encode source images into a low-dimensional latent space, reducing computational complexity while preserving essential features. It also incorporates a multiscale feature fusion mechanism, enhancing both detail and structural understanding. Additionally, MSDT features a self-attention module to extract unique high-frequency features and a cross-attention module to identify common low-frequency features across modalities, improving contextual understanding. Extensive experiments on three datasets show that MSDT significantly outperforms state-of-the-art methods across twelve evaluation metrics, achieving an SSIM score of 0.98. Moreover, MSDT demonstrates superior robustness and generalizability, highlighting the potential of integrating diffusion models with transformer architectures for multimodal image fusion.
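The abstract's central mechanism pairs self-attention (capturing features unique to one modality) with cross-attention (capturing features shared across modalities). The sketch below illustrates that distinction in NumPy; it is not the authors' implementation, and all names (`attention`, `self_ir`, `cross`, the 16-token/32-dimension latent shapes) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
# Latent tokens for two modalities (e.g. infrared and visible):
# 16 tokens of dimension 32, standing in for the encoder's latent space.
ir = rng.standard_normal((16, 32))
vis = rng.standard_normal((16, 32))

# Self-attention within one modality: each modality attends to itself,
# preserving its distinctive (high-frequency) structure.
self_ir = attention(ir, ir, ir)

# Cross-attention: one modality's queries attend to the other modality's
# keys/values, surfacing content shared across modalities (low-frequency).
cross = attention(ir, vis, vis)

# Naive residual combination of the two branches, for illustration only.
fused = self_ir + cross
```

The key design point is the choice of key/value source: identical to the query modality for self-attention, and the other modality for cross-attention; everything else in the computation is the same.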
Journal Introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts in any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few such illustrative examples are glial cell networks, computational neuroscience, Brain Computer Interface, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, computational intelligence for the IoT and Smart-X technologies.