Authors: Zihao Zhang, Jian Zhou, Junyi Shi, Jian Lu
DOI: 10.1016/j.compeleceng.2025.110672
Journal: Computers & Electrical Engineering, Volume 128, Article 110672
Publication date: 2025-09-05 (Journal Article)
Impact factor: 4.9; JCR Q1, Computer Science, Hardware & Architecture
Source: https://www.sciencedirect.com/science/article/pii/S0045790625006159
CPIGAN: Infrared and visible image fusion via cross-scale progressive interaction network with adversarial learning
The objective of infrared and visible image fusion is to synthesize a single fused image that retains the salient target features and texture details of the source images. However, existing image fusion algorithms have not yet fully considered the intrinsic depth characteristics of images and ignore the correlation between their information at different scales, which limits fusion performance. To this end, we propose a cross-scale progressively interacting adversarial fusion network, called CPIGAN. In particular, in the generator, we design a progressively interacting feature extractor, which consists of a dual-stream gradient residual enhancement module (DGREM) and a multimodal cross perception module (MCPM). This design not only achieves feature-level texture enhancement but also facilitates the full interaction of relevant and complementary information of multimodal images at different scales. Furthermore, we propose a cross-scale cross-fusion strategy that combines global and local attention models. It enables the accurate capture of local details at the spatial level while providing a comprehensive grasp of global information at the channel level. Extensive experiments show that our CPIGAN outperforms other advanced methods in subjective and objective evaluations. Meanwhile, we demonstrate the superiority of our method by evaluating it on the downstream task of object detection.
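The abstract describes a fusion strategy that combines channel-level (global) and spatial-level (local) attention, with cross-interaction between the infrared and visible streams. The paper's actual modules (DGREM, MCPM) are learned networks whose details are not given here, but the general idea can be illustrated with a minimal NumPy sketch; the function names and the parameter-free attention weights below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # Global (channel-level) attention: squeeze the spatial dimensions by
    # average pooling, then derive one weight per channel. A learned module
    # would insert fully connected layers here; this sketch omits them.
    pooled = feat.mean(axis=(1, 2))            # shape (C,)
    return sigmoid(pooled)[:, None, None]      # shape (C, 1, 1)

def spatial_attention(feat):
    # Local (spatial-level) attention: collapse the channel dimension with
    # mean and max, then derive one weight per pixel.
    avg_map = feat.mean(axis=0)                # shape (H, W)
    max_map = feat.max(axis=0)                 # shape (H, W)
    return sigmoid(avg_map + max_map)[None, :, :]  # shape (1, H, W)

def cross_fuse(ir_feat, vis_feat):
    # Cross-application: each modality's features are reweighted by the
    # OTHER modality's attention maps, so complementary information is
    # exchanged before the two streams are summed.
    ir_w  = ir_feat  * channel_attention(vis_feat) * spatial_attention(vis_feat)
    vis_w = vis_feat * channel_attention(ir_feat)  * spatial_attention(ir_feat)
    return ir_w + vis_w

# Toy feature maps standing in for one scale of the extractor's output.
rng = np.random.default_rng(0)
ir  = rng.standard_normal((8, 16, 16))   # (channels, height, width)
vis = rng.standard_normal((8, 16, 16))
fused = cross_fuse(ir, vis)
print(fused.shape)  # (8, 16, 16)
```

In the paper this exchange is applied progressively across several scales and trained adversarially; the sketch shows only the attention-weighted cross-fusion at a single scale.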
Journal introduction:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.