G. He, Jiaqi Ji, Dandan Dong, Jun Wang, Jianping Fan
{"title":"Infrared and Visible Image Fusion Method by Using Hybrid Representation Learning","authors":"G. He, Jiaqi Ji, Dandan Dong, Jun Wang, Jianping Fan","doi":"10.1109/LGRS.2019.2907721","DOIUrl":null,"url":null,"abstract":"For remote sensing image fusion, infrared and visible images have very different brightness due to their disparate imaging mechanisms, the result of which is that nontarget regions in the infrared image often affect the fusion of details in the visible image. This letter proposes a novel infrared and visible image fusion method basing hybrid representation learning by combining dictionary-learning-based joint sparse representation (JSR) and nonnegative sparse representation (NNSR). In the proposed method, different fusion strategies are adopted, respectively, for the mean image, which represents the primary energy information, and for the deaveraged image, which contains important detail features. Since the deaveraged image contains a large amount of high-frequency details information of the source image, JSR is utilized to sparsely and accurately extract the common and innovation features of the deaveraged image, thus, accurately merging high-frequency details in the deaveraged image. Then, the mean image represents low-frequency and overview features of the source image, according to NNSR, mean image is classified well-directed to different feature regions and then fused, respectively. Such proposed method, on the one hand, can eliminate the impact on fusion result suffering from very different brightness causing by different imaging mechanism between infrared and visible image; on the other hand, it can improve the readability and accuracy of the result fusion image. 
Experimental result shows that, compared with the classical and state-of-the-art fusion methods, the proposed method not only can accurately integrate the infrared target but also has rich background details of the visible image, and the fusion effect is superior.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"16 1","pages":"1796-1800"},"PeriodicalIF":4.0000,"publicationDate":"2019-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2907721","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Geoscience and Remote Sensing Letters","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/LGRS.2019.2907721","RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 11
Abstract
For remote sensing image fusion, infrared and visible images have very different brightness because of their disparate imaging mechanisms; as a result, nontarget regions in the infrared image often interfere with the fusion of details from the visible image. This letter proposes a novel infrared and visible image fusion method based on hybrid representation learning, which combines dictionary-learning-based joint sparse representation (JSR) with nonnegative sparse representation (NNSR). In the proposed method, different fusion strategies are adopted for the mean image, which carries the primary energy information, and for the deaveraged image, which contains the important detail features. Because the deaveraged image holds most of the high-frequency detail of the source images, JSR is used to sparsely and accurately extract its common and innovation features, so that the high-frequency details are merged accurately. The mean image, in turn, represents the low-frequency, overview features of the source images; using NNSR, it is classified into distinct feature regions, which are then fused separately. On the one hand, the proposed method eliminates the influence on the fusion result of the very different brightness caused by the differing imaging mechanisms of infrared and visible sensors; on the other hand, it improves the readability and accuracy of the fused image. Experimental results show that, compared with classical and state-of-the-art fusion methods, the proposed method not only integrates the infrared target accurately but also preserves the rich background details of the visible image, yielding superior fusion quality.
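The two-branch decomposition the abstract describes can be sketched in a few lines of numpy. This is only an illustrative skeleton of the pipeline, not the authors' method: the block-mean decomposition stands in for the paper's mean/deaveraged split, and the JSR and NNSR stages are replaced by simple stand-ins (max-absolute detail selection and plain averaging), whereas the actual method learns dictionaries and solves sparse-coding problems for each branch. The function names `block_mean` and `fuse_hybrid_sketch` are hypothetical.

```python
import numpy as np

def block_mean(img, b=8):
    """Per-block mean: every b x b block is replaced by its average value,
    giving a low-frequency "mean image" (crops to a multiple of b)."""
    h, w = img.shape
    H, W = h - h % b, w - w % b
    img = img[:H, :W]
    blocks = img.reshape(H // b, b, W // b, b)
    means = blocks.mean(axis=(1, 3), keepdims=True)
    return np.broadcast_to(means, blocks.shape).reshape(H, W), img

def fuse_hybrid_sketch(ir, vis, b=8):
    """Illustrative two-branch fusion: decompose each source into a mean
    image and a deaveraged (detail) image, fuse the branches separately,
    then recombine. JSR/NNSR are replaced by naive stand-in rules."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    m_ir, ir_c = block_mean(ir, b)
    m_vis, vis_c = block_mean(vis, b)
    d_ir, d_vis = ir_c - m_ir, vis_c - m_vis      # deaveraged detail images
    # Stand-in for JSR detail fusion: keep the detail of larger magnitude.
    d_f = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    # Stand-in for NNSR region-wise mean fusion: simple average.
    m_f = 0.5 * (m_ir + m_vis)
    return m_f + d_f
```

In the paper itself, the detail branch is reconstructed from sparse codes over a learned joint dictionary, and the mean branch is first segmented into feature regions via NNSR before region-specific fusion; the sketch only shows where those stages plug into the decomposition.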
About the journal:
IEEE Geoscience and Remote Sensing Letters (GRSL) is a monthly publication for short papers (maximum length 5 pages) addressing new ideas and formative concepts in remote sensing as well as important new and timely results and concepts. Papers should relate to the theory, concepts and techniques of science and engineering as applied to sensing the earth, oceans, atmosphere, and space, and the processing, interpretation, and dissemination of this information. The technical content of papers must be both new and significant. Experimental data must be complete and include sufficient description of experimental apparatus, methods, and relevant experimental conditions. GRSL encourages the incorporation of "extended objects" or "multimedia" such as animations to enhance the shorter papers.