{"title":"A temporally-aware noise-informed invertible network for progressive video denoising","authors":"Yan Huang , Huixin Luo , Yong Xu , Xian-Bing Meng","doi":"10.1016/j.imavis.2024.105369","DOIUrl":null,"url":null,"abstract":"<div><div>Video denoising is a critical task in computer vision, aiming to enhance video quality by removing noise from consecutive video frames. Despite significant progress, existing video denoising methods still suffer from challenges in maintaining temporal consistency and adapting to different noise levels. To address these issues, a temporally-aware and noise-informed invertible network is proposed by following divide-and-conquer principle for progressive video denoising. Specifically, a recurrent attention-based reversible network is designed to distinctly extract temporal information from consecutive frames, thus tackling the learning problem of temporal consistency. Simultaneously, a noise-informed two-way dense block is developed by using estimated noise as conditional guidance to adapt to different noise levels. The noise-informed guidance can then be used to guide the learning of dense block for efficient video denoising. Under the framework of invertible network, the designed two parts can be further integrated to achieve invertible learning to enable progressive video denoising. Experiments and comparative studies demonstrate that our method can achieve good denoising accuracy and fast inference speed in both synthetic scenes and real-world applications.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105369"},"PeriodicalIF":4.2000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885624004748","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Video denoising is a critical task in computer vision, aiming to enhance video quality by removing noise from consecutive video frames. Despite significant progress, existing video denoising methods still struggle to maintain temporal consistency and to adapt to different noise levels. To address these issues, a temporally-aware and noise-informed invertible network is proposed, following a divide-and-conquer principle, for progressive video denoising. Specifically, a recurrent attention-based reversible network is designed to extract temporal information from consecutive frames, thereby addressing the problem of learning temporal consistency. Simultaneously, a noise-informed two-way dense block is developed that uses the estimated noise as conditional guidance, allowing the network to adapt to different noise levels; this guidance steers the learning of the dense block for efficient video denoising. Within the invertible-network framework, the two designed components are further integrated to achieve invertible learning and enable progressive video denoising. Experiments and comparative studies demonstrate that our method achieves good denoising accuracy and fast inference speed in both synthetic scenes and real-world applications.
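The abstract does not include code, but the core idea of pairing an invertible (reversible) block with noise-conditioned guidance can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration only: the class names `NoiseConditionedDenseBlock` and `InvertibleCouplingBlock`, the additive-coupling form, and the channel sizes are hypothetical stand-ins, and the recurrent attention part that handles temporal consistency is omitted.

```python
# Illustrative sketch (not the authors' code): an additive invertible coupling
# block whose transform is conditioned on an estimated per-pixel noise map,
# standing in for the paper's noise-informed guidance. All names and sizes
# are assumptions.
import torch
import torch.nn as nn


class NoiseConditionedDenseBlock(nn.Module):
    """Small conv block whose input is concatenated with an estimated noise map."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1),  # +1 channel for the noise map
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, noise_map: torch.Tensor) -> torch.Tensor:
        return self.body(torch.cat([x, noise_map], dim=1))


class InvertibleCouplingBlock(nn.Module):
    """Additive coupling: split channels, transform one half conditioned on the
    other half and the noise map; exactly invertible by subtraction."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0
        self.transform = NoiseConditionedDenseBlock(channels // 2)

    def forward(self, x: torch.Tensor, noise_map: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        y2 = x2 + self.transform(x1, noise_map)   # forward (denoising) direction
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor, noise_map: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        x2 = y2 - self.transform(y1, noise_map)   # exact inverse of the forward pass
        return torch.cat([y1, x2], dim=1)


if __name__ == "__main__":
    block = InvertibleCouplingBlock(channels=8)
    frame_feat = torch.randn(1, 8, 64, 64)         # features of one video frame
    noise_map = torch.full((1, 1, 64, 64), 0.1)    # estimated per-pixel noise level
    out = block(frame_feat, noise_map)
    recon = block.inverse(out, noise_map)
    print(torch.allclose(recon, frame_feat))        # True: the block is invertible
```

Additive coupling is one standard way to make such a block exactly invertible, which is what allows the inverse pass in the sketch to reconstruct its input; how the paper realizes invertibility and the two-way dense structure in detail is specified in the full text, not here.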
Journal overview:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.