{"title":"MGRCFusion: An infrared and visible image fusion network based on multi-scale group residual convolution","authors":"Pan Zhu, Yufei Yin, Xinglin Zhou","doi":"10.1016/j.optlastec.2024.111576","DOIUrl":null,"url":null,"abstract":"The purpose of fusing infrared and visible images is to obtain an informative image that contains bright thermal targets and rich visible texture details. However, the existing deep learning-based algorithms generally neglect finer deep-level multi-scale features, and only the last layer of features is injected into the feature fusion strategy. To this end, we propose an optimized network model for deeper-level multi-scale features extraction based on multi-scale group residual convolution. Meanwhile, a dense connection module is designed to adequately integrate these multi-scale feature information. We contrast our method with advanced deep learning-based algorithms on multiple datasets. Extensive qualitative and quantitative experiments reveal that our method surpasses the existing fusion methods. Furthermore, ablation experiments illustrate the excellence of the multi-scale group residual convolution module for infrared and visible image fusion.","PeriodicalId":19597,"journal":{"name":"Optics & Laser Technology","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics & Laser Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.optlastec.2024.111576","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The purpose of fusing infrared and visible images is to obtain an informative image that contains bright thermal targets and rich visible texture details. However, existing deep learning-based algorithms generally neglect finer deep-level multi-scale features, and only the last layer of features is fed into the feature fusion strategy. To this end, we propose an optimized network model, based on multi-scale group residual convolution, for extracting deeper-level multi-scale features. In addition, a dense connection module is designed to adequately integrate this multi-scale feature information. We compare our method with advanced deep learning-based algorithms on multiple datasets. Extensive qualitative and quantitative experiments show that our method surpasses existing fusion methods. Furthermore, ablation experiments demonstrate the effectiveness of the multi-scale group residual convolution module for infrared and visible image fusion.
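To make the core idea concrete, the sketch below shows one plausible form of a multi-scale group residual convolution block in PyTorch: parallel grouped convolutions with different kernel sizes capture features at several scales, a 1x1 convolution fuses them, and a residual shortcut preserves the input features. The kernel sizes, group count, channel width, and activation are illustrative assumptions; the abstract does not specify the paper's actual configuration.

```python
# Minimal sketch of a multi-scale group residual convolution block.
# All hyperparameters (kernel sizes 1/3/5/7, groups=4, LeakyReLU) are
# assumptions for illustration, not values taken from the paper.
import torch
import torch.nn as nn


class MultiScaleGroupResConv(nn.Module):
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        # Parallel grouped convolutions at several receptive-field scales.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2, groups=groups)
            for k in (1, 3, 5, 7)
        ])
        # 1x1 convolution fuses the concatenated multi-scale branches.
        self.fuse = nn.Conv2d(channels * 4, channels, kernel_size=1)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        # Residual shortcut keeps the input features alongside the fused multi-scale ones.
        return self.act(x + self.fuse(multi_scale))


if __name__ == "__main__":
    block = MultiScaleGroupResConv(channels=32)
    y = block(torch.randn(1, 32, 64, 64))  # a 64x64 feature map with 32 channels
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```

In a full fusion network of this kind, the outputs of several such blocks would typically be passed to a densely connected module so that features from every depth, not only the last layer, contribute to the fused image; the wiring shown here is only a schematic reading of the abstract.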