A CSR-based visible and infrared image fusion method in low illumination conditions for sense and avoid

N. Ma, Y. Cao, Z. Zhang, Y. Fan, M. Ding
{"title":"A CSR-based visible and infrared image fusion method in low illumination conditions for sense and avoid","authors":"N. Ma, Y. Cao, Z. Zhang, Y. Fan, M. Ding","doi":"10.1017/aer.2023.51","DOIUrl":null,"url":null,"abstract":"\n Machine vision has been extensively researched in the field of unmanned aerial vehicles (UAV) recently. However, the ability of Sense and Avoid (SAA) largely limited by environmental visibility, which brings hazards to flight safety in low illumination or nighttime conditions. In order to solve this critical problem, an approach of image enhancement is proposed in this paper to improve image qualities in low illumination conditions. Considering the complementarity of visible and infrared images, a visible and infrared image fusion method based on convolutional sparse representation (CSR) is a promising solution to improve the SAA ability of UAVs. Firstly, the source image is decomposed into a texture layer and structure layer since infrared images are good at characterising structural information, and visible images have richer texture information. Both the structure and the texture layers are transformed into the sparse convolutional domain through the CSR mechanism, and then CSR coefficient mapping are fused via activity level assessment. Finally, the image is synthesised through the reconstruction results of the fusion texture and structure layers. In the experimental simulation section, a series of visible and infrared registered images including aerial targets are adopted to evaluate the proposed algorithm. Experimental results demonstrates that the proposed method increases image qualities in low illumination conditions effectively and can enhance the object details, which has better performance than traditional methods.","PeriodicalId":22567,"journal":{"name":"The Aeronautical Journal (1968)","volume":"100 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Aeronautical Journal (1968)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/aer.2023.51","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Machine vision has been extensively researched in the field of unmanned aerial vehicles (UAVs) in recent years. However, Sense and Avoid (SAA) capability is largely limited by environmental visibility, which poses hazards to flight safety in low illumination or nighttime conditions. To address this critical problem, an image enhancement approach is proposed in this paper to improve image quality in low illumination conditions. Considering the complementarity of visible and infrared images, a visible and infrared image fusion method based on convolutional sparse representation (CSR) is a promising solution for improving the SAA capability of UAVs. Firstly, each source image is decomposed into a texture layer and a structure layer, since infrared images are good at characterising structural information while visible images carry richer texture information. Both the structure and texture layers are transformed into the sparse convolutional domain through the CSR mechanism, and the resulting CSR coefficient maps are fused via activity level assessment. Finally, the fused image is synthesised from the reconstructed fused texture and structure layers. In the experimental simulation section, a series of registered visible and infrared images containing aerial targets is used to evaluate the proposed algorithm. Experimental results demonstrate that the proposed method effectively improves image quality in low illumination conditions and enhances object details, offering better performance than traditional methods.
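
The abstract describes a decompose / fuse / reconstruct pipeline. The following is a minimal, illustrative Python sketch of that flow only, assuming registered single-channel float images in [0, 1]. The CSR stage (convolutional sparse coding of each layer and fusion of the coefficient maps) is stood in for by a simple Gaussian two-scale decomposition and a local absolute-magnitude activity measure, so this is not the authors' algorithm; function names such as `decompose` and `fuse_layers` are purely illustrative.

```python
# Illustrative sketch of the two-layer fusion pipeline outlined in the abstract.
# NOTE: the CSR coefficient-map fusion is replaced by a simple activity-level
# (local absolute magnitude) weighting, so this shows only the overall flow.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=5.0):
    """Split an image into a smooth structure layer and a detail (texture) layer."""
    structure = gaussian_filter(img, sigma=sigma)  # low-frequency content
    texture = img - structure                      # residual high-frequency detail
    return structure, texture

def fuse_layers(layer_vis, layer_ir, sigma=3.0):
    """Fuse two corresponding layers by comparing local activity levels."""
    act_vis = gaussian_filter(np.abs(layer_vis), sigma=sigma)
    act_ir = gaussian_filter(np.abs(layer_ir), sigma=sigma)
    weight_vis = act_vis / (act_vis + act_ir + 1e-12)  # soft choose-max weighting
    return weight_vis * layer_vis + (1.0 - weight_vis) * layer_ir

def fuse(visible, infrared):
    """End-to-end fusion of registered visible and infrared images."""
    s_vis, t_vis = decompose(visible)
    s_ir, t_ir = decompose(infrared)
    fused_structure = fuse_layers(s_vis, s_ir)  # infrared tends to dominate here
    fused_texture = fuse_layers(t_vis, t_ir)    # visible detail tends to dominate here
    return np.clip(fused_structure + fused_texture, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.random((128, 128))  # placeholders for registered source images
    ir = rng.random((128, 128))
    print(fuse(vis, ir).shape)    # (128, 128)
```

In the paper's method the fusion rule operates on CSR coefficient maps rather than on the pixel-domain layers directly, but the decomposition into structure and texture layers and the activity-level comparison follow the same overall structure as above.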