Dense Inception Attention Neural Network for In-Loop Filter

Xiaoyu Xu, Jian Qian, Li Yu, Hongkui Wang, Xing Zeng, Zhengang Li, Ning Wang

2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954499
Recently, deep learning has made significant progress in High Efficiency Video Coding (HEVC), especially for in-loop filtering. In this paper, we propose a dense inception attention network (DIA_Net) to better exploit image information and model capacity. DIA_Net contains multiple inception blocks with kernels of different sizes, so as to extract information at various scales. Meanwhile, an attention mechanism comprising spatial attention and channel attention is used to fully exploit feature information. Furthermore, we adopt a dense residual structure to deepen the network. We attach DIA_Net to the end of the in-loop filtering stage of HEVC as a post-processor and apply it to the luma component. Experimental results demonstrate that the proposed DIA_Net yields a remarkable improvement over standard HEVC: it achieves an 8.2% BD-rate reduction under the all-intra (AI) configuration and a 5.6% BD-rate reduction under the random access (RA) configuration.
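The abstract names three ingredients: inception blocks with multi-size kernels, combined channel and spatial attention, and dense residual connections feeding a post-processing network on the reconstructed luma. A minimal PyTorch sketch of how such a network could be assembled follows; all layer widths, kernel sizes, block counts, and the exact fusion scheme are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of DIA_Net-style building blocks. Widths, kernel
# sizes, and the dense-fusion layout are assumptions for illustration.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel convolutions with different kernel sizes (multi-scale)."""
    def __init__(self, channels):
        super().__init__()
        branch = channels // 4
        self.b1 = nn.Conv2d(channels, branch, 1)
        self.b3 = nn.Conv2d(channels, branch, 3, padding=1)
        self.b5 = nn.Conv2d(channels, branch, 5, padding=2)
        self.b7 = nn.Conv2d(channels, branch, 7, padding=3)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.b7(x)], dim=1)

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel reweighting."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class SpatialAttention(nn.Module):
    """Spatial mask derived from per-pixel channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        stats = torch.cat(
            [x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True)[0]], dim=1)
        return x * torch.sigmoid(self.conv(stats))

class DIABlock(nn.Module):
    """Inception + channel/spatial attention with a residual shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            InceptionBlock(channels), nn.ReLU(inplace=True),
            ChannelAttention(channels), SpatialAttention())

    def forward(self, x):
        return x + self.body(x)

class DIANet(nn.Module):
    """Post-processor on the reconstructed luma: predicts a restoration residual."""
    def __init__(self, channels=32, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.ModuleList(DIABlock(channels) for _ in range(num_blocks))
        # Dense structure: every block's output is concatenated and fused.
        self.fuse = nn.Conv2d(channels * (num_blocks + 1), channels, 1)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, y):  # y: reconstructed luma, shape (N, 1, H, W)
        feats = [self.head(y)]
        for blk in self.blocks:
            feats.append(blk(feats[-1]))
        return y + self.tail(self.fuse(torch.cat(feats, dim=1)))
```

In this sketch the network is purely residual: it learns the difference between the in-loop-filtered luma and the original, which is the usual formulation for CNN post-processors in HEVC.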