{"title":"红外与可见光图像融合的自监督方法","authors":"Xiaopeng Lin, Guanxing Zhou, Weihong Zeng, Xiaotong Tu, Yue Huang, Xinghao Ding","doi":"10.1109/ICIP46576.2022.9897731","DOIUrl":null,"url":null,"abstract":"Infrared and visible image fusion (IVIF) plays important roles in many applications. Since there is no ground-truth, the fusion performance measurement is a difficult but important problem for the task. Previous unsupervised deep learning based fusion methods depend on a hand-crafted loss function to define the distance between the fused image and two types of source images, which still cannot well preserve the vital information in the fused images. To address these issues, we propose an image fusion performance measurement between the fused image and the decomposition of the fused image. A novel self-supervised network for infrared and visible image fusion is designed to preserve the vital information of source images by narrowing the distance between the source images and the decomposed ones. Extensive experimental results demonstrate that our proposed measurement has the ability in improving the performance of backbone network in both subjective and objective evaluations.","PeriodicalId":387035,"journal":{"name":"2022 IEEE International Conference on Image Processing (ICIP)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Self-Supervised Method for Infrared and Visible Image Fusion\",\"authors\":\"Xiaopeng Lin, Guanxing Zhou, Weihong Zeng, Xiaotong Tu, Yue Huang, Xinghao Ding\",\"doi\":\"10.1109/ICIP46576.2022.9897731\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Infrared and visible image fusion (IVIF) plays important roles in many applications. Since there is no ground-truth, the fusion performance measurement is a difficult but important problem for the task. Previous unsupervised deep learning based fusion methods depend on a hand-crafted loss function to define the distance between the fused image and two types of source images, which still cannot well preserve the vital information in the fused images. To address these issues, we propose an image fusion performance measurement between the fused image and the decomposition of the fused image. A novel self-supervised network for infrared and visible image fusion is designed to preserve the vital information of source images by narrowing the distance between the source images and the decomposed ones. 
Extensive experimental results demonstrate that our proposed measurement has the ability in improving the performance of backbone network in both subjective and objective evaluations.\",\"PeriodicalId\":387035,\"journal\":{\"name\":\"2022 IEEE International Conference on Image Processing (ICIP)\",\"volume\":\"97 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Image Processing (ICIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIP46576.2022.9897731\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Image Processing (ICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP46576.2022.9897731","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Infrared and visible image fusion (IVIF) plays an important role in many applications. Because no ground truth exists, measuring fusion performance is a difficult but important problem for this task. Previous unsupervised deep-learning-based fusion methods depend on a hand-crafted loss function that defines the distance between the fused image and the two types of source images, and such methods still cannot preserve the vital information of the source images well. To address these issues, we propose an image fusion performance measurement defined between the fused image and its decomposition. A novel self-supervised network for infrared and visible image fusion is designed to preserve the vital information of the source images by narrowing the distance between the source images and the decomposed ones. Extensive experimental results demonstrate that the proposed measurement improves the performance of the backbone network in both subjective and objective evaluations.
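Only the abstract is available here, so the following is a minimal PyTorch sketch of the general idea it describes, not the paper's actual method: a fusion network combines the infrared and visible inputs, a decomposition network splits the fused result back into two images, and a self-supervised loss narrows the distance between the source images and the decomposed ones. The network names (FusionNet, DecompositionNet, self_supervised_loss), layer sizes, and the choice of an L1 distance are all illustrative assumptions.

```python
# Sketch of a self-supervised fusion/decomposition loop; architecture and loss
# choices below are assumptions, not the paper's reported design.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Fuses a 1-channel infrared image and a 1-channel visible image into one image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, ir, vis):
        return self.net(torch.cat([ir, vis], dim=1))

class DecompositionNet(nn.Module):
    """Decomposes the fused image back into an infrared-like and a visible-like image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, fused):
        out = self.net(fused)
        return out[:, :1], out[:, 1:]  # reconstructed infrared, reconstructed visible

def self_supervised_loss(ir, vis, fusion_net, decomp_net):
    """Distance between the source images and the decomposition of the fused image."""
    fused = fusion_net(ir, vis)
    ir_rec, vis_rec = decomp_net(fused)
    return nn.functional.l1_loss(ir_rec, ir) + nn.functional.l1_loss(vis_rec, vis)

if __name__ == "__main__":
    fusion_net, decomp_net = FusionNet(), DecompositionNet()
    ir = torch.rand(4, 1, 64, 64)    # dummy infrared batch
    vis = torch.rand(4, 1, 64, 64)   # dummy visible batch
    loss = self_supervised_loss(ir, vis, fusion_net, decomp_net)
    loss.backward()                  # both networks train jointly without any ground-truth fused image
    print(float(loss))
```

Because the supervision signal comes only from reconstructing the source images out of the fused image, no ground-truth fused image is needed, which matches the self-supervised setting described in the abstract.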