Fusion of infrared and visual images through multiscale hybrid unidirectional total variation

Yi Wang, Zhonghua Luo, Zhi-hai Xu, H. Feng, Qi Li, Yue-ting Chen
{"title":"基于多尺度混合单向全变的红外与视觉图像融合","authors":"Yi Wang, Zhonghua Luo, Zhi-hai Xu, H. Feng, Qi Li, Yue-ting Chen","doi":"10.1109/SIPROCESS.2016.7888220","DOIUrl":null,"url":null,"abstract":"As an important research area in image analysis and computer vision, fusion of infrared and visible images aims at delivering an effective combination of image information from different sensors. Since the final fused image is the demonstration of fusion process, it should reveal both source images' vital information distinctly. To achieve this purpose, an image fusion method based on multiscale hybrid unidirectional total variation (MHUTV) and visual weight map(VWM) is proposed in this paper. The MHUTV combines the feature of extracting the details from images and the capacity of suppressing stripe noise, which leads to a more ideal visual effect. The MHUTV is a multiscale, unidirectional and self-adaption image decomposition method, which is used to fuse infrared and visible images in this paper. The visual weight map aims to reveal attention drawing distribution of human observer. It provides a subband fusion criterion, which can guarantee the highlighting of interesting area from infrared and visible images. Firstly, multiscale hybrid unidirectional total variation is discussed and used to decompose the source images into approximation subbands and detail subbands. Secondly, the approximation and details subbands are respectively fused by a fusion rule based on visual weight map. Finally, the fused subbands are combined into one image by implementing inverse MHUTV. The results of comparison experiments on different sets of images demonstrate the effectiveness of the proposed method.","PeriodicalId":142802,"journal":{"name":"2016 IEEE International Conference on Signal and Image Processing (ICSIP)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Fusion of infrared and visual images through multiscale hybrid unidirectional total variation\",\"authors\":\"Yi Wang, Zhonghua Luo, Zhi-hai Xu, H. Feng, Qi Li, Yue-ting Chen\",\"doi\":\"10.1109/SIPROCESS.2016.7888220\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As an important research area in image analysis and computer vision, fusion of infrared and visible images aims at delivering an effective combination of image information from different sensors. Since the final fused image is the demonstration of fusion process, it should reveal both source images' vital information distinctly. To achieve this purpose, an image fusion method based on multiscale hybrid unidirectional total variation (MHUTV) and visual weight map(VWM) is proposed in this paper. The MHUTV combines the feature of extracting the details from images and the capacity of suppressing stripe noise, which leads to a more ideal visual effect. The MHUTV is a multiscale, unidirectional and self-adaption image decomposition method, which is used to fuse infrared and visible images in this paper. The visual weight map aims to reveal attention drawing distribution of human observer. It provides a subband fusion criterion, which can guarantee the highlighting of interesting area from infrared and visible images. Firstly, multiscale hybrid unidirectional total variation is discussed and used to decompose the source images into approximation subbands and detail subbands. 
Secondly, the approximation and details subbands are respectively fused by a fusion rule based on visual weight map. Finally, the fused subbands are combined into one image by implementing inverse MHUTV. The results of comparison experiments on different sets of images demonstrate the effectiveness of the proposed method.\",\"PeriodicalId\":142802,\"journal\":{\"name\":\"2016 IEEE International Conference on Signal and Image Processing (ICSIP)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE International Conference on Signal and Image Processing (ICSIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SIPROCESS.2016.7888220\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Conference on Signal and Image Processing (ICSIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIPROCESS.2016.7888220","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Fusion of infrared and visible images is an important research area in image analysis and computer vision; it aims to combine image information from different sensors effectively. Because the fused image is the final product of the fusion process, it should clearly present the essential information of both source images. To this end, this paper proposes an image fusion method based on multiscale hybrid unidirectional total variation (MHUTV) and a visual weight map (VWM). MHUTV combines the ability to extract image details with the capacity to suppress stripe noise, which leads to a more pleasing visual result; it is a multiscale, unidirectional, and self-adaptive image decomposition method, and it is used here to fuse infrared and visible images. The visual weight map is designed to reflect the attention distribution of a human observer and provides a subband fusion criterion that ensures the regions of interest in the infrared and visible images are highlighted. First, multiscale hybrid unidirectional total variation is discussed and used to decompose the source images into approximation subbands and detail subbands. Second, the approximation and detail subbands are each fused by a rule based on the visual weight map. Finally, the fused subbands are combined into a single image by the inverse MHUTV transform. Comparison experiments on several sets of images demonstrate the effectiveness of the proposed method.
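The abstract does not give the exact MHUTV energy or its solver, so the following is only a minimal sketch of the general idea: a direction-aware, edge-preserving total-variation smoothing applied repeatedly with growing regularization strength, so that each level peels off a detail subband and the last smoothed image serves as the approximation subband. The smoothed-L1 energy, the gradient-descent solver, and every parameter value are assumptions made for illustration, not the authors' formulation.

```python
import numpy as np

def grad_x(u):
    # Forward difference across columns (horizontal direction), periodic boundary.
    return np.roll(u, -1, axis=1) - u

def grad_y(u):
    # Forward difference across rows (vertical direction), periodic boundary.
    return np.roll(u, -1, axis=0) - u

def div_x(p):
    # Backward difference; negative adjoint of grad_x.
    return p - np.roll(p, 1, axis=1)

def div_y(p):
    # Backward difference; negative adjoint of grad_y.
    return p - np.roll(p, 1, axis=0)

def hutv_smooth(f, lam_y=0.1, lam_x=0.05, n_iter=300, tau=0.1, eps=1e-2):
    """Edge-preserving smoothing with a smoothed hybrid unidirectional TV energy
    (an assumption, not the paper's exact model):
        E(u) = 0.5*||u - f||^2 + lam_y*sum(sqrt((d_y u)^2 + eps))
                               + lam_x*sum(sqrt((d_x u)^2 + eps))
    minimized by plain gradient descent."""
    u = f.astype(np.float64).copy()
    for _ in range(n_iter):
        gy, gx = grad_y(u), grad_x(u)
        grad = (u - f) \
            - lam_y * div_y(gy / np.sqrt(gy ** 2 + eps)) \
            - lam_x * div_x(gx / np.sqrt(gx ** 2 + eps))
        u -= tau * grad
    return u

def mhutv_decompose(f, lams=(0.05, 0.1, 0.2)):
    """Hypothetical multiscale decomposition: smooth repeatedly with growing
    regularization; each detail subband is the difference between levels and
    the last smoothed image is the approximation subband."""
    details, current = [], f.astype(np.float64)
    for lam in lams:
        smooth = hutv_smooth(current, lam_y=lam, lam_x=0.5 * lam)
        details.append(current - smooth)
        current = smooth
    return current, details  # approximation subband, list of detail subbands
```

By construction the approximation plus all detail subbands sums back to the input exactly, which is what makes the inverse transform in the fusion step a simple summation.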
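The visual weight map is likewise only described qualitatively in the abstract, so the sketch below substitutes a crude smoothed local-contrast map and applies the fusion rules the abstract outlines: a weight-map-driven average for the approximation subbands, a choose-max rule for the detail subbands, and reconstruction by summation. It reuses `mhutv_decompose` from the previous sketch; the helper names and parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def visual_weight_map(img, sigma=5.0):
    """Crude stand-in for the paper's visual weight map: smoothed local
    contrast (absolute Laplacian). Purely illustrative."""
    return gaussian_filter(np.abs(laplace(img.astype(np.float64))), sigma) + 1e-8

def fuse_ir_visible(ir, vis):
    """Fuse registered infrared and visible images of the same size
    (float arrays in [0, 1]); reuses mhutv_decompose from the sketch above."""
    a_ir, d_ir = mhutv_decompose(ir)
    a_vis, d_vis = mhutv_decompose(vis)

    w_ir, w_vis = visual_weight_map(ir), visual_weight_map(vis)

    # Approximation subbands: weighted average driven by the weight maps.
    a_fused = (w_ir * a_ir + w_vis * a_vis) / (w_ir + w_vis)

    # Detail subbands: keep, pixel by pixel, the coefficient whose source image
    # has the larger visual weight (a simple choose-max rule).
    d_fused = [np.where(w_ir >= w_vis, di, dv) for di, dv in zip(d_ir, d_vis)]

    # Inverse transform: the decomposition is additive, so summing the fused
    # subbands reconstructs the fused image.
    return np.clip(a_fused + sum(d_fused), 0.0, 1.0)
```

The choose-max rule for detail subbands is a common way to keep the sharper of the two sources at each pixel, while the weighted average for the approximation subbands avoids biasing the overall brightness toward either sensor; both are stand-ins for the paper's VWM-based criteria.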