Dense Inception Attention Neural Network for In-Loop Filter

Xiaoyu Xu, Jian Qian, Li Yu, Hongkui Wang, Xing Zeng, Zhengang Li, Ning Wang
2019 Picture Coding Symposium (PCS), November 2019
DOI: 10.1109/PCS48520.2019.8954499
Citations: 7

Abstract

Recently, deep learning has made significant progress in High Efficiency Video Coding (HEVC), especially in in-loop filtering. In this paper, we propose a dense inception attention network (DIA_Net) to better exploit image information and model capacity. DIA_Net contains multiple inception blocks whose kernels of different sizes extract information at various scales. Meanwhile, an attention mechanism combining spatial attention and channel attention is used to fully exploit feature information. Furthermore, we adopt a dense residual structure to deepen the network. We attach DIA_Net to the end of the in-loop filtering stage in HEVC as a post-processor and apply it to the luma component. Experimental results demonstrate that the proposed DIA_Net achieves a remarkable improvement over standard HEVC: an 8.2% BD-rate reduction under the all-intra (AI) configuration and a 5.6% BD-rate reduction under the random-access (RA) configuration.
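The abstract names channel attention as one of the two attention branches in DIA_Net. The paper's actual layers are learned convolutions; as a hedged, stdlib-only illustration of the general idea (squeeze a channel to a global statistic, derive a gate, rescale the channel), here is a toy sketch. The fixed sigmoid gate is an assumption for illustration; the real network learns this mapping with trainable layers.

```python
import math

def channel_attention(features):
    """Toy channel-attention pass (not the authors' implementation).

    features: list of channels, each a 2-D list (H x W) of floats.
    Returns the channels rescaled by a sigmoid gate of their global mean.
    """
    out = []
    for ch in features:
        # Squeeze: global average pooling over the spatial dimensions.
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        # Excite: here a plain sigmoid of the mean stands in for the
        # learned gating layers of a real channel-attention block.
        gate = 1.0 / (1.0 + math.exp(-mean))
        # Rescale: every value in the channel is weighted by its gate.
        out.append([[v * gate for v in row] for row in ch])
    return out
```

Channels with a larger global response receive a gate closer to 1 and are passed through nearly unchanged, while weaker channels are attenuated, which is the core reweighting behavior channel attention provides.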
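The reported gains are given as BD-rate reductions. As background on how such numbers are conventionally obtained (this is the standard Bjøntegaard metric, not code from the paper), the following stdlib-only sketch fits log10(rate) as a cubic in PSNR for an anchor and a test codec, integrates both fits over the overlapping PSNR range, and converts the mean log-rate gap to a percentage.

```python
import math

def _polyfit3(xs, ys):
    # Fit a cubic exactly through 4 points by Gauss-Jordan elimination
    # on the Vandermonde system (stdlib only, no numpy).
    n = 4
    a = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

def bd_rate(rates_anchor, psnrs_anchor, rates_test, psnrs_test):
    """Average bitrate difference (%) of the test curve vs. the anchor.

    Negative values mean the test codec needs less bitrate for the
    same quality (a BD-rate reduction).
    """
    c1 = _polyfit3(psnrs_anchor, [math.log10(r) for r in rates_anchor])
    c2 = _polyfit3(psnrs_test, [math.log10(r) for r in rates_test])
    lo = max(min(psnrs_anchor), min(psnrs_test))
    hi = min(max(psnrs_anchor), max(psnrs_test))

    def integral(c, x):
        # Antiderivative of the cubic evaluated at x.
        return sum(c[j] * x ** (j + 1) / (j + 1) for j in range(4))

    avg_diff = ((integral(c2, hi) - integral(c2, lo))
                - (integral(c1, hi) - integral(c1, lo))) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0
```

So a reported "8.2% BD-rate reduction" corresponds to `bd_rate` returning about -8.2 when the DIA_Net-filtered curve is compared against the HEVC anchor over the four standard QP points.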