DCAN: Deep Consecutive Attention Network for Video Super Resolution

Talha Saleem, Sovann Chen, S. Aramvith
{"title":"DCAN:视频超分辨率深度连续注意网络","authors":"Talha Saleem, Sovann Chen, S. Aramvith","doi":"10.23919/APSIPAASC55919.2022.9979823","DOIUrl":null,"url":null,"abstract":"Slow motion is visually attractive in video applications and gets more attention in video super-resolution (VSR). To generate the high-resolution (HR) center frame with its neighbor HR frames from the low-resolution (LR) of two frames. Two sub-tasks are required, including video super-resolution (VSR) and video frame interpolation (VFI). However, the interpolation approach does not successfully extract low-level features to achieve the acceptable result of space-time video super-resolution. Therefore, the restoration performance of existing systems is constrained due to rarely considering the spatial-temporal correlation and the long-term temporal context concurrently. To this extent, we propose a deep consecutive attention network-based method to generate attentive features to get HR slow-motion frames. A channel attention module and an attentive temporal feature module are designed to improve the perceptual quality of predicted interpolation feature frames. The experimental results show the proposed method outperforms 0.17 dB in an average PSNR compared to the state-of-the-art baseline method.","PeriodicalId":382967,"journal":{"name":"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DCAN: Deep Consecutive Attention Network for Video Super Resolution\",\"authors\":\"Talha Saleem, Sovann Chen, S. Aramvith\",\"doi\":\"10.23919/APSIPAASC55919.2022.9979823\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Slow motion is visually attractive in video applications and gets more attention in video super-resolution (VSR). To generate the high-resolution (HR) center frame with its neighbor HR frames from the low-resolution (LR) of two frames. Two sub-tasks are required, including video super-resolution (VSR) and video frame interpolation (VFI). However, the interpolation approach does not successfully extract low-level features to achieve the acceptable result of space-time video super-resolution. Therefore, the restoration performance of existing systems is constrained due to rarely considering the spatial-temporal correlation and the long-term temporal context concurrently. To this extent, we propose a deep consecutive attention network-based method to generate attentive features to get HR slow-motion frames. A channel attention module and an attentive temporal feature module are designed to improve the perceptual quality of predicted interpolation feature frames. 
The experimental results show the proposed method outperforms 0.17 dB in an average PSNR compared to the state-of-the-art baseline method.\",\"PeriodicalId\":382967,\"journal\":{\"name\":\"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/APSIPAASC55919.2022.9979823\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/APSIPAASC55919.2022.9979823","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Slow motion is visually attractive in video applications and has received increasing attention in video super-resolution (VSR). Generating the high-resolution (HR) center frame, together with its neighboring HR frames, from two low-resolution (LR) frames involves two sub-tasks: video super-resolution (VSR) and video frame interpolation (VFI). However, existing interpolation approaches do not extract low-level features well enough to achieve acceptable space-time video super-resolution results. Consequently, the restoration performance of existing systems is constrained because they rarely consider the spatial-temporal correlation and the long-term temporal context concurrently. To this end, we propose a deep consecutive attention network-based method that generates attentive features for HR slow-motion frames. A channel attention module and an attentive temporal feature module are designed to improve the perceptual quality of the predicted interpolated feature frames. Experimental results show that the proposed method outperforms the state-of-the-art baseline method by 0.17 dB in average PSNR.
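The abstract does not give implementation details of the channel attention module. The following is a minimal, illustrative sketch of a squeeze-and-excitation-style channel attention block of the kind the abstract describes, written in PyTorch; the class name `ChannelAttention`, the reduction ratio, and the layer choices are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch only: a squeeze-and-excitation-style channel attention
# block, as commonly used in super-resolution networks. Names and hyperparameters
# are assumptions, not the DCAN authors' implementation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each feature map to a single value.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck produces per-channel weights in (0, 1).
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature maps from a VSR/VFI backbone.
        weights = self.fc(self.pool(x))
        # Re-weight channels so that more informative features are emphasized.
        return x * weights


# Example usage on a dummy feature map.
if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    attn = ChannelAttention(64)
    print(attn(feats).shape)  # torch.Size([1, 64, 32, 32])
```

The re-weighted features would then feed the temporal feature stage; how the attentive temporal feature module combines them across frames is not specified in the abstract.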