Title: Degradation-Guided cross-consistent deep unfolding network for video restoration under diverse weathers
Authors: Yuanshuo Cheng, Mingwen Shao, Yecong Wan, Yuanjian Qiao, Wangmeng Zuo, Deyu Meng
Journal: Neural Networks, Volume 191, Article 107700 (JCR Q1, Computer Science, Artificial Intelligence)
DOI: 10.1016/j.neunet.2025.107700
Publication date: 2025-06-16
URL: https://www.sciencedirect.com/science/article/pii/S0893608025005805
Citations: 0
Abstract
Existing video restoration (VR) methods have made promising progress in improving the quality of videos degraded by adverse weather. However, these approaches restore videos with only one specific type of degradation and ignore the diversity of degradations in the real world, which limits their application in realistic scenes with diverse adverse weather conditions. To address this issue, in this paper we propose a Cross-consistent Deep Unfolding Network (CDUN) that adaptively restores frames corrupted by different degradations under the guidance of degradation features. Specifically, the proposed CDUN incorporates (1) a flexible iterative optimization framework capable of restoring frames corrupted by arbitrary degradations, according to the corresponding degradation features given in advance. To enable the framework to eliminate diverse degradations, we devise (2) a Sequence-wise Adaptive Degradation Estimator (SADE) that estimates degradation features for the corrupted video. By orchestrating these two cascading procedures, the proposed CDUN achieves end-to-end restoration of videos in diverse-degradation scenes. In addition, we propose a window-based inter-frame fusion strategy to utilize information from more adjacent frames. This strategy progressively stacks temporal windows over multiple iterations, effectively enlarging the temporal receptive field and enabling each frame's restoration to leverage information from distant frames. This work establishes the first explicit model for diverse-degraded videos and is one of the earliest studies of video restoration in diverse-degradation scenes. Extensive experiments indicate that our method achieves state-of-the-art performance.
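The window-based fusion strategy described above enlarges the temporal receptive field iteration by iteration. A minimal sketch of that growth, under the illustrative assumption (not taken from the paper) that each unfolding iteration fuses a fixed centered window of adjacent frames, so each iteration extends the reach by window_size - 1 frames on top of the previous one:

```python
# Hedged sketch: how progressively stacking temporal windows across
# unfolding iterations can grow the temporal receptive field.
# The fixed centered window and the resulting linear-growth formula are
# illustrative assumptions, not the paper's exact scheme.

def temporal_receptive_field(window_size: int, iterations: int) -> int:
    """Number of input frames that can influence one output frame after
    `iterations` rounds of fusion, assuming each round fuses a centered
    window of `window_size` adjacent frames (adding window_size - 1 per
    round to the single starting frame)."""
    if window_size < 1 or iterations < 0:
        raise ValueError("need window_size >= 1 and iterations >= 0")
    return 1 + (window_size - 1) * iterations

# With a 3-frame window, 5 iterations let each restored frame draw on
# information from 1 + 2 * 5 = 11 frames.
print(temporal_receptive_field(3, 5))  # 11
```

This illustrates why stacking small windows over many iterations is cheaper than fusing one large window directly: per-iteration cost stays proportional to the small window, while the effective reach grows linearly with the iteration count.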
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.