Enhanced Video Super-Resolution Network Towards Compressed Data

Impact Factor 5.2 · CAS Tier 3 (Computer Science) · JCR Q1 (Computer Science, Information Systems)
Feng Li, Yixuan Wu, Anqi Li, Huihui Bai, Runmin Cong, Yao Zhao
{"title":"Enhanced Video Super-Resolution Network Towards Compressed Data","authors":"Feng Li, Yixuan Wu, Anqi Li, Huihui Bai, Runmin Cong, Yao Zhao","doi":"10.1145/3651309","DOIUrl":null,"url":null,"abstract":"<p>Video super-resolution (VSR) algorithms aim at recovering a temporally consistent high-resolution (HR) video from its corresponding low-resolution (LR) video sequence. Due to the limited bandwidth during video transmission, most available videos on the internet are compressed. Nevertheless, few existing algorithms consider the compression factor in practical applications. In this paper, we propose an enhanced VSR model towards compressed videos, termed as ECVSR, to simultaneously achieve compression artifacts reduction and SR reconstruction end-to-end. ECVSR contains a motion-excited temporal adaption network (METAN) and a multi-frame SR network (SRNet). The METAN takes decoded LR video frames as input and models inter-frame correlations via bidirectional deformable alignment and motion-excited temporal adaption, where temporal differences are calculated as motion prior to excite the motion-sensitive regions of temporal features. In SRNet, cascaded recurrent multi-scale blocks (RMSB) are employed to learn deep spatio-temporal representations from adapted multi-frame features. Then, we build a reconstruction module for spatio-temporal information integration and HR frame reconstruction, which is followed by a detail refinement module for texture and visual quality enhancement. Extensive experimental results on compressed videos demonstrate the superiority of our method for compressed VSR. 
Code will be available at https://github.com/lifengcs/ECVSR.</p>","PeriodicalId":50937,"journal":{"name":"ACM Transactions on Multimedia Computing Communications and Applications","volume":"75 1","pages":""},"PeriodicalIF":5.2000,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Multimedia Computing Communications and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3651309","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Video super-resolution (VSR) algorithms aim to recover a temporally consistent high-resolution (HR) video from its corresponding low-resolution (LR) video sequence. Due to the limited bandwidth during video transmission, most available videos on the internet are compressed. Nevertheless, few existing algorithms consider the compression factor in practical applications. In this paper, we propose an enhanced VSR model for compressed videos, termed ECVSR, to simultaneously achieve compression artifact reduction and SR reconstruction end-to-end. ECVSR contains a motion-excited temporal adaption network (METAN) and a multi-frame SR network (SRNet). The METAN takes decoded LR video frames as input and models inter-frame correlations via bidirectional deformable alignment and motion-excited temporal adaption, where temporal differences are calculated as a motion prior to excite the motion-sensitive regions of temporal features. In SRNet, cascaded recurrent multi-scale blocks (RMSB) are employed to learn deep spatio-temporal representations from the adapted multi-frame features. Then, we build a reconstruction module for spatio-temporal information integration and HR frame reconstruction, which is followed by a detail refinement module for texture and visual quality enhancement. Extensive experimental results on compressed videos demonstrate the superiority of our method for compressed VSR. Code will be available at https://github.com/lifengcs/ECVSR.
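The motion-excited temporal adaption idea can be sketched as follows: frame-to-frame feature differences serve as a motion prior, which is passed through a sigmoid gate to excite (re-weight) motion-sensitive regions of the temporal features. This is a minimal NumPy illustration of that gating mechanism only; the array shapes, the identity motion transform, and the residual formulation are illustrative assumptions, not the paper's actual layers (which operate on learned, deformably aligned features).

```python
import numpy as np

def motion_excited_adaptation(feats):
    """Illustrative motion-excited gating over multi-frame features.

    feats: array of shape (T, C, H, W) -- aligned per-frame features.
    Returns features of the same shape, with motion regions amplified.
    """
    # Temporal difference between consecutive frames as a motion prior;
    # the final time step has no successor, so its difference is zero.
    diff = np.zeros_like(feats)
    diff[:-1] = feats[1:] - feats[:-1]

    # Sigmoid gate: large feature changes (motion) push the gate toward 1.
    attn = 1.0 / (1.0 + np.exp(-diff))

    # Residual excitation: modulated features added back to the input.
    return feats * attn + feats
```

In static regions the temporal difference is zero, the gate sits at 0.5, and the output is a uniform 1.5x scaling of the input; regions with strong positive motion cues are amplified more, which is the "excitation" effect the abstract describes.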

Source journal metrics: CiteScore 8.50 · Self-citation rate 5.90% · Articles per year 285 · Review time 7.5 months
Journal description: The ACM Transactions on Multimedia Computing, Communications, and Applications is the flagship publication of the ACM Special Interest Group in Multimedia (SIGMM). It solicits paper submissions on all aspects of multimedia. Papers on single media (for instance, audio, video, animation) and their processing are also welcome. TOMM is a peer-reviewed, archival journal, available in both print and digital form. The journal is published quarterly, with roughly seven 23-page articles in each issue. In addition, all Special Issues are published online-only to ensure timely publication. The transactions consist primarily of research papers. This is an archival journal, and it is intended that the papers will have lasting importance and value over time. In general, papers whose primary focus is on particular multimedia products or the current state of the industry will not be included.