Knowledge-embedded multi-layer collaborative adaptive fusion network: Addressing challenges in foggy conditions and complex imaging

Impact Factor: 5.2 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Zhu Chen, Fan Li, Yueqin Diao, Wanlong Zhao, Puyin Fan
{"title":"Knowledge-embedded multi-layer collaborative adaptive fusion network: Addressing challenges in foggy conditions and complex imaging","authors":"Zhu Chen,&nbsp;Fan Li,&nbsp;Yueqin Diao,&nbsp;Wanlong Zhao,&nbsp;Puyin Fan","doi":"10.1016/j.jksuci.2024.102230","DOIUrl":null,"url":null,"abstract":"<div><div>Infrared and visible image fusion aims at generating high-quality images that serve both human and machine visual perception under extreme imaging conditions. However, current fusion methods primarily rely on datasets comprising infrared and visible images captured under clear weather conditions. When applied to real-world scenarios, image fusion tasks inevitably encounter challenges posed by adverse weather conditions such as heavy fog, resulting in difficulties in obtaining effective information and inferior visual perception. To address these challenges, this paper proposes a Mean Teacher-based Self-supervised Image Restoration and multimodal Image Fusion joint learning network (SIRIFN), which enhances the robustness of the fusion network in adverse weather conditions by employing deep supervision from a guiding network to the learning network. Furthermore, to enhance the network’s information extraction and integration capabilities, our Multi-level Feature Collaborative adaptive Reconstruction Network (MFCRNet) is introduced, which adopts a multi-branch, multi-scale design, with differentiated processing strategies for different features. This approach preserves rich texture information while maintaining semantic consistency from the source images. Extensive experiments demonstrate that SIRIFN outperforms current state-of-the-art algorithms in both visual quality and quantitative evaluation. Specifically, the joint implementation of image restoration and multimodal fusion provides more effective information for visual tasks under extreme weather conditions, thereby facilitating downstream visual tasks.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102230"},"PeriodicalIF":5.2000,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of King Saud University-Computer and Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1319157824003197","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Infrared and visible image fusion aims to generate high-quality images that serve both human and machine visual perception under extreme imaging conditions. However, current fusion methods rely primarily on datasets of infrared and visible images captured in clear weather. When applied to real-world scenarios, image fusion inevitably encounters challenges posed by adverse weather such as heavy fog, making it difficult to obtain effective information and degrading visual perception. To address these challenges, this paper proposes a Mean Teacher-based Self-supervised Image Restoration and multimodal Image Fusion joint learning network (SIRIFN), which enhances the robustness of the fusion network in adverse weather by applying deep supervision from a guiding network to the learning network. Furthermore, to strengthen the network's information extraction and integration capabilities, we introduce a Multi-level Feature Collaborative adaptive Reconstruction Network (MFCRNet), which adopts a multi-branch, multi-scale design with differentiated processing strategies for different feature types. This approach preserves rich texture information while maintaining semantic consistency with the source images. Extensive experiments demonstrate that SIRIFN outperforms current state-of-the-art algorithms in both visual quality and quantitative evaluation. In particular, the joint implementation of image restoration and multimodal fusion provides more effective information for visual tasks under extreme weather conditions, thereby facilitating downstream visual tasks.
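The abstract describes a teacher-student (Mean Teacher) scheme in which a guiding network supervises a learning network that must restore a degraded (foggy) visible image and then fuse it with the infrared image. The paper's exact architecture and losses are not reproduced here, so the following is a minimal sketch of that training pattern under stated assumptions: the network interface (student(ir, vis) returning a restored image and a fused image), the L1 loss choices, and the weighting term lambda_consist are illustrative placeholders, not the authors' implementation.

import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.999):
    # Exponential moving average update: teacher <- decay * teacher + (1 - decay) * student.
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def train_step(student, teacher, optimizer, ir, vis_clean, vis_foggy, lambda_consist=1.0):
    # One joint restoration + fusion step with teacher-student consistency.
    # ir: infrared batch (B, 1, H, W); vis_clean / vis_foggy: clear and fogged visible batches (B, 3, H, W).
    student.train()
    teacher.eval()

    # The learning (student) network sees the degraded visible input and must restore and fuse it.
    restored_s, fused_s = student(ir, vis_foggy)

    # The guiding (teacher) network sees the clean visible input and provides supervision targets.
    with torch.no_grad():
        _, fused_t = teacher(ir, vis_clean)

    # Self-supervised restoration loss against the clean visible image.
    loss_restore = F.l1_loss(restored_s, vis_clean)
    # Consistency loss pulling the student's fusion result toward the teacher's.
    loss_consist = F.l1_loss(fused_s, fused_t)

    loss = loss_restore + lambda_consist * loss_consist
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # The teacher tracks the student via EMA rather than by gradient descent.
    ema_update(teacher, student)
    return loss.item()

In such a setup the teacher would typically be initialized as a deep copy of the student with gradients disabled (e.g. teacher = copy.deepcopy(student); teacher.requires_grad_(False)), so that only the student receives gradient updates while the teacher evolves smoothly through the EMA step.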
Source journal metrics
CiteScore: 10.50
Self-citation rate: 8.70%
Articles published: 656
Review time: 29 days
Journal description: In 2022 the Journal of King Saud University - Computer and Information Sciences will become an author-paid open access journal. Authors who submit their manuscript after October 31st 2021 will be asked to pay an Article Processing Charge (APC) after acceptance of their paper to make their work immediately, permanently, and freely accessible to all. The Journal of King Saud University - Computer and Information Sciences is a refereed, international journal that covers all aspects of both the foundations of computer science and its practical applications.