Degradation-Guided cross-consistent deep unfolding network for video restoration under diverse weathers

Impact Factor: 6.0 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Yuanshuo Cheng, Mingwen Shao, Yecong Wan, Yuanjian Qiao, Wangmeng Zuo, Deyu Meng
DOI: 10.1016/j.neunet.2025.107700
Journal: Neural Networks, Volume 191, Article 107700
Published: 2025-06-16
URL: https://www.sciencedirect.com/science/article/pii/S0893608025005805
Citations: 0

Abstract

Existing video restoration (VR) methods have made promising progress in improving the quality of videos degraded by adverse weather. However, these approaches restore videos with only one specific type of degradation and ignore the diversity of degradations in the real world, which limits their application in realistic scenes with diverse adverse weather conditions. To address this issue, in this paper we propose a Cross-consistent Deep Unfolding Network (CDUN) that adaptively restores frames corrupted by different degradations under the guidance of degradation features. Specifically, the proposed CDUN incorporates (1) a flexible iterative optimization framework capable of restoring frames corrupted by arbitrary degradations according to the corresponding degradation features given in advance. To enable the framework to eliminate diverse degradations, we devise (2) a Sequence-wise Adaptive Degradation Estimator (SADE) that estimates degradation features for the corrupted video. By orchestrating these two cascading procedures, the proposed CDUN achieves end-to-end restoration of videos in the diverse-degradation scenario. In addition, we propose a window-based inter-frame fusion strategy to exploit information from more adjacent frames. This strategy progressively stacks temporal windows across multiple iterations, effectively enlarging the temporal receptive field and enabling each frame's restoration to leverage information from distant frames. This work establishes the first explicit model for diverse-degraded videos and is one of the earliest studies of video restoration in the diverse-degradation scenario. Extensive experiments indicate that our method achieves state-of-the-art performance.
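The two cascading procedures described above (sequence-level degradation estimation, then degradation-guided iterative refinement with windowed temporal fusion) can be illustrated with a toy sketch. All function names, the averaging fusion, and the shrinkage-style correction below are illustrative assumptions, not the paper's actual learned modules; the real SADE and unfolding steps are neural networks.

```python
import numpy as np

def window_fusion(frames, radius=1):
    """Toy stand-in for the window-based inter-frame fusion: average each
    frame with its neighbors inside a temporal window. Repeating this across
    iterations progressively stacks windows, enlarging the temporal
    receptive field as the abstract describes."""
    T = len(frames)
    fused = np.empty_like(frames)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        fused[t] = frames[lo:hi].mean(axis=0)
    return fused

def cdun_sketch(frames, degradation_feat, num_iters=3):
    """Toy sketch of the unfolding loop: alternate a degradation-guided
    correction (here, a hypothetical shrinkage of the estimated degradation)
    with windowed temporal fusion."""
    restored = frames.astype(float).copy()
    for k in range(num_iters):
        restored = restored - degradation_feat * (0.5 ** (k + 1))
        restored = window_fusion(restored)
    return restored
```

With a window radius of 1, each iteration mixes information from one additional neighbor on each side, so after `k` iterations a frame's restoration can draw on frames up to `k` steps away, which is the receptive-field growth the fusion strategy is aiming at.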
Source journal: Neural Networks (Engineering/Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussion between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.