RepAr-Net: Re-Parameterized Encoders and Attentive Feature Arsenals for Fast Video Denoising

S. Sharan, Adithya K. Krishna, A. S. Rao, V. Gopi
{"title":"RepAr-Net: Re-Parameterized Encoders and Attentive Feature Arsenals for Fast Video Denoising","authors":"S. Sharan, Adithya K. Krishna, A. S. Rao, V. Gopi","doi":"10.1109/icra46639.2022.9812394","DOIUrl":null,"url":null,"abstract":"Real-time video denoising finds applications in several fields like mobile robotics, satellite television, and surveillance systems. Traditional denoising approaches are more common in such systems than their deep learning-based counterparts despite their inferior performance. The large size and heavy computational requirements of neural network-based denoising models pose a serious impediment to their deployment in real-time applications. In this paper, we propose RepAr-Net, a simple yet efficient architecture for fast video de noising. We propose to use temporally separable encoders to generate feature maps called arsenals that can be cached for reuse. We also incorporate re-parameterizable blocks that improve the representative power of the network without affecting the run-time. We benchmark our model on the Set-8 and 2017 DAVIS-Test datasets. Our model achieves state-of-the-art results with up to 29.62% improvement in PSNR and a 50% decrease in run times over existing methods. 
Our codes are open-sourced at: github.com/spider-tronix/RepAr-Net.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icra46639.2022.9812394","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Real-time video denoising finds applications in several fields like mobile robotics, satellite television, and surveillance systems. Traditional denoising approaches are more common in such systems than their deep learning-based counterparts despite their inferior performance. The large size and heavy computational requirements of neural network-based denoising models pose a serious impediment to their deployment in real-time applications. In this paper, we propose RepAr-Net, a simple yet efficient architecture for fast video denoising. We propose to use temporally separable encoders to generate feature maps called arsenals that can be cached for reuse. We also incorporate re-parameterizable blocks that improve the representative power of the network without affecting the run-time. We benchmark our model on the Set-8 and 2017 DAVIS-Test datasets. Our model achieves state-of-the-art results with up to 29.62% improvement in PSNR and a 50% decrease in run times over existing methods. Our codes are open-sourced at: github.com/spider-tronix/RepAr-Net.
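The "re-parameterizable blocks" mentioned above rely on the linearity of convolution: several parallel branches trained jointly (e.g. a 3×3 conv, a 1×1 conv, and an identity path, in the style of structural re-parameterization) can be collapsed into a single convolution at inference time, so the extra capacity costs nothing at run time. The paper does not spell out its exact branch layout, so the following is a minimal single-channel sketch of the fusion principle, not RepAr-Net's actual block:

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation, single channel (toy reference impl.)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k3 = rng.standard_normal((3, 3))   # 3x3 branch
k1 = rng.standard_normal((1, 1))   # 1x1 branch

# Embed the 1x1 branch in a zero-padded 3x3 kernel (weight at the centre).
k1_as_3 = np.zeros((3, 3))
k1_as_3[1, 1] = k1[0, 0]

# The identity branch is a 3x3 kernel with 1 at the centre.
k_id = np.zeros((3, 3))
k_id[1, 1] = 1.0

# Training-time: three parallel branches, outputs summed.
multi_branch = conv2d(x, k3) + conv2d(x, k1_as_3) + conv2d(x, k_id)

# Inference-time: one fused kernel -- same output, one conv instead of three.
fused = conv2d(x, k3 + k1_as_3 + k_id)

assert np.allclose(multi_branch, fused)
```

Because the fusion is exact (not an approximation), the network keeps the representational benefit of the multi-branch training topology while paying the inference cost of a plain convolution.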
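The cached "arsenals" idea can likewise be sketched abstractly: if the encoder is temporally separable, each frame's spatial features can be computed once and reused across every overlapping temporal window that contains the frame. The class and function names below are hypothetical illustrations of that caching pattern, assuming a sliding window of consecutive frames, not the paper's implementation:

```python
from collections import OrderedDict

def encode_frame(frame):
    # Stand-in for the per-frame spatial encoder; here just a toy transform.
    return [2.0 * v for v in frame]

class ArsenalCache:
    """Cache per-frame feature maps ("arsenals") so each frame in a sliding
    temporal window is encoded only once as the window advances."""

    def __init__(self, window=3):
        self.window = window
        self.cache = OrderedDict()  # frame index -> cached features
        self.encode_calls = 0       # counts actual encoder invocations

    def features_for_window(self, frames, t):
        feats = []
        for i in range(t - self.window + 1, t + 1):
            if i < 0:
                continue
            if i not in self.cache:          # encode only on a cache miss
                self.cache[i] = encode_frame(frames[i])
                self.encode_calls += 1
            feats.append(self.cache[i])
        while len(self.cache) > self.window:  # evict frames past the window
            self.cache.popitem(last=False)
        return feats

frames = [[float(t)] for t in range(10)]
cache = ArsenalCache(window=3)
for t in range(10):
    cache.features_for_window(frames, t)

# Despite 10 overlapping 3-frame windows, each frame was encoded exactly once.
assert cache.encode_calls == 10
```

Without caching, a window of size `w` would re-encode each frame up to `w` times; reuse reduces per-step encoder work to a single new frame, which is where the run-time savings claimed above would come from.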