Deep Contextual Video Compression Based on Machine Learning

M. A. Chubar, M. Gashnikov
DOI: 10.1109/ITNT57377.2023.10139047
Published in: 2023 IX International Conference on Information Technology and Nanotechnology (ITNT)
Publication date: 2023-04-17
Citations: 0

Abstract

The structure of existing neural network video compression methods in most cases includes predictive encoding, which uses a subtraction operation between the predicted and current frames to remove redundancy. To increase efficiency, an approach based on deep contextual video compression is used. In addition to the difference frame, this approach relies heavily on specialized algorithms for extracting additional information characterizing the difference between closely spaced frames. The use of context in this case makes it possible to achieve better reconstruction quality for video sequences, in particular for complex textures with a large amount of high-frequency content. This implies that the proposed method can potentially lead to significant savings in storage and transmission costs while maintaining high-quality video output. This article presents the results of computational experiments evaluating the effectiveness of the investigated deep contextual video compression method on real video sequences. Experimental findings demonstrate the advantages of the considered technique in PSNR/bpp coordinates when compared to the performance of three common video codecs: H.264, H.265, and VP9.
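As a minimal illustration (not the paper's actual pipeline), the two building blocks the abstract refers to can be sketched as follows: classical predictive coding transmits a residual (difference frame) between the predicted and current frames, and rate-distortion performance is reported as PSNR versus bits per pixel (bpp). All function names here are hypothetical helpers for exposition.

```python
import numpy as np

def residual_frame(predicted: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Difference frame used in predictive coding: the decoder recovers
    the current frame as `predicted + residual`. Cast to int16 so that
    negative differences of 8-bit frames are representable."""
    return current.astype(np.int16) - predicted.astype(np.int16)

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming 8-bit frames by default."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

def bits_per_pixel(compressed_size_bytes: int, width: int, height: int) -> float:
    """Bits per pixel of one compressed frame, the x-axis of a PSNR/bpp curve."""
    return compressed_size_bytes * 8.0 / (width * height)
```

A deep contextual codec differs from this plain subtraction scheme in that, alongside the residual, learned networks extract additional context features describing how nearby frames differ, which conditions the entropy coding and reconstruction.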