Integration of Bi-dimensional Empirical Mode Decomposition With Two Streams Deep Learning Network for Infrared and Visible Image Fusion
Manoj K. Panda, B. Subudhi, T. Veerakumar, V. Jakhetiya
2022 30th European Signal Processing Conference (EUSIPCO), published 2022-08-29
DOI: 10.23919/eusipco55093.2022.9909631
Citations: 0
Abstract
Image fusion combines complementary details from images captured by different sensors into a single image with high perceptual quality. In the fusion process, the significant details from the different source images are merged in a meaningful way. In this article, we propose a novel infrared and visible image fusion technique that is, to our knowledge, the first to combine bi-dimensional empirical mode decomposition (BEMD) with a VGG-16 deep neural network. The proposed BEMD strategy is incorporated into a pre-trained VGG-16 network, which effectively handles the ambiguity of infrared and visible images and retains deep multi-layer features at different scales in the frequency domain. A novel fusion strategy is proposed to analyze the spatial inter-dependency between these features and precisely preserve the correlated information from the source images. A minimum-selection strategy is employed in the proposed algorithm to preserve salient details while reducing artifacts in the fused image. The performance of the proposed algorithm is evaluated through qualitative and quantitative assessments. The proposed technique is compared against fifteen existing state-of-the-art fusion techniques and is found to be effective.
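The abstract does not give implementation details of the fusion rule. As a minimal illustration only, a generic per-pixel minimum-selection rule over two aligned feature maps might be sketched as follows; the choice of magnitude comparison and the `minimum_selection_fuse` helper are assumptions, not the paper's exact formulation:

```python
import numpy as np

def minimum_selection_fuse(feat_ir, feat_vis):
    """Fuse two aligned feature maps by keeping, at each position,
    the value with the smaller magnitude.

    This is a generic minimum-selection rule for illustration; the
    paper's precise selection criterion is not specified in the
    abstract.
    """
    feat_ir = np.asarray(feat_ir, dtype=float)
    feat_vis = np.asarray(feat_vis, dtype=float)
    # Boolean mask: True where the infrared response is the smaller one
    mask = np.abs(feat_ir) <= np.abs(feat_vis)
    return np.where(mask, feat_ir, feat_vis)

# Toy example with two 2x2 "feature maps"
a = np.array([[1.0, -5.0], [2.0, 0.5]])
b = np.array([[3.0, 4.0], [-1.0, 0.7]])
fused = minimum_selection_fuse(a, b)
```

In a full pipeline, such a rule would be applied to the multi-scale features that BEMD and the pre-trained VGG-16 extract from each source image, before reconstructing the fused result.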