Authors: Jing Wang, Long Yu, Shengwei Tian
DOI: 10.1016/j.engappai.2024.109583
Journal: Engineering Applications of Artificial Intelligence (Q1, Automation & Control Systems; Impact Factor 7.5)
Publication type: Journal Article
Publication date: 2024-11-02
Article URL: https://www.sciencedirect.com/science/article/pii/S095219762401741X
Cross-attention interaction learning network for multi-model image fusion via transformer
Abstract: Current image fusion techniques often fail to adequately consider the inherent correlations among different modalities, resulting in suboptimal integration of multi-modal information. Drawing inspiration from inter-modal interactions, this paper introduces CrossATF, a cross-attention interaction learning network built on the transformer architecture. The cornerstone of CrossATF is a generator network equipped with dual encoders. The multi-modal encoder incorporates two transformer modules of comparable computational complexity, alongside a carefully designed cross-modal transformer. This architectural choice enables the model to extract modality-specific features while simultaneously integrating complementary features from the other modalities. In addition, an auxiliary encoder encodes features from the entire input image, enhancing the model's holistic understanding of the scene. Notably, the loss function is tailored to selectively preserve a more targeted set of information from the source images, strengthening the network's feature extraction capability. Comprehensive experimental results across multiple datasets demonstrate the promising performance of the proposed approach compared with both task-specific methods and unified fusion frameworks.
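The core idea the abstract describes — one modality attending to another so that complementary features are pulled across modalities — is scaled dot-product cross-attention: queries come from one modality's token features, keys and values from the other's. The sketch below is a minimal pure-Python illustration of that mechanism only; the feature shapes and values are illustrative assumptions, and it does not reproduce the paper's CrossATF architecture, its projection layers, or its loss function.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product attention where `queries` are tokens from one
    modality and `keys`/`values` are tokens from the other modality.
    Each output row is a convex combination of the other modality's values."""
    d = len(keys[0])  # feature dimension, used for the 1/sqrt(d) scaling
    fused = []
    for q in queries:
        # similarity of this query token to every cross-modal key token
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # weighted sum of the other modality's value vectors
        fused.append([sum(w * v[j] for w, v in zip(weights, values))
                      for j in range(len(values[0]))])
    return fused

# Illustrative example (hypothetical 2-D features): one visible-light token
# attends over two infrared tokens and their associated value vectors.
visible_q = [[1.0, 0.0]]
infrared_k = [[1.0, 0.0], [0.0, 1.0]]
infrared_v = [[2.0, 0.0], [0.0, 2.0]]
fused = cross_attention(visible_q, infrared_k, infrared_v)
```

Because the visible query aligns with the first infrared key, the attention weights favor the first value vector, so `fused[0][0] > fused[0][1]`. In a real network the queries, keys, and values would be learned linear projections of encoder feature maps rather than raw features.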
About the journal:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.