{"title":"基于机器学习的深度上下文视频压缩","authors":"M. A. Chubar, M. Gashnikov","doi":"10.1109/ITNT57377.2023.10139047","DOIUrl":null,"url":null,"abstract":"The structure of existing neural network video compression methods in most cases includes predictive encoding, which uses a subtraction operation between the predicted and current frames to remove redundancy. To increase efficiency, an approach based on deep contextual video compression is used. In addition to the difference frame, this approach relies heavily on specialized algorithms for extracting additional information characterizing the difference of closely spaced frames. The use of context in this case makes it possible to achieve a better quality of reconstruction of video sequences, in particular for complex textures with a large number of high frequencies. This implies that the proposed method can potentially lead to significant savings in storage and transmission costs while maintaining high-quality video output. This article presents the results of computational experiments to evaluate the effectiveness of the investigated method of deep contextual video compression on real video sequences. Experimental findings demonstrate the advantages of the considered technique in PSNRR/bpp coordinates when compared to the performance of three common video codecs: H.264, H.265, and VP9.","PeriodicalId":296438,"journal":{"name":"2023 IX International Conference on Information Technology and Nanotechnology (ITNT)","volume":"111 3S 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Contextual Video Compression Based on Machine Learning\",\"authors\":\"M. A. Chubar, M. 
Gashnikov\",\"doi\":\"10.1109/ITNT57377.2023.10139047\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The structure of existing neural network video compression methods in most cases includes predictive encoding, which uses a subtraction operation between the predicted and current frames to remove redundancy. To increase efficiency, an approach based on deep contextual video compression is used. In addition to the difference frame, this approach relies heavily on specialized algorithms for extracting additional information characterizing the difference of closely spaced frames. The use of context in this case makes it possible to achieve a better quality of reconstruction of video sequences, in particular for complex textures with a large number of high frequencies. This implies that the proposed method can potentially lead to significant savings in storage and transmission costs while maintaining high-quality video output. This article presents the results of computational experiments to evaluate the effectiveness of the investigated method of deep contextual video compression on real video sequences. 
Experimental findings demonstrate the advantages of the considered technique in PSNRR/bpp coordinates when compared to the performance of three common video codecs: H.264, H.265, and VP9.\",\"PeriodicalId\":296438,\"journal\":{\"name\":\"2023 IX International Conference on Information Technology and Nanotechnology (ITNT)\",\"volume\":\"111 3S 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IX International Conference on Information Technology and Nanotechnology (ITNT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITNT57377.2023.10139047\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IX International Conference on Information Technology and Nanotechnology (ITNT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITNT57377.2023.10139047","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Contextual Video Compression Based on Machine Learning
Most existing neural-network video compression methods include a predictive-coding stage, which subtracts the predicted frame from the current frame to remove temporal redundancy. To improve coding efficiency, an approach based on deep contextual video compression is used: in addition to the difference frame, this approach relies on specialized algorithms that extract additional information characterizing the differences between closely spaced frames. Using this context yields higher-quality reconstruction of video sequences, particularly for complex textures with substantial high-frequency content. This suggests the method can significantly reduce storage and transmission costs while maintaining high-quality video output. This article presents the results of computational experiments evaluating the effectiveness of the investigated deep contextual video compression method on real video sequences. The experimental findings demonstrate the advantages of the considered technique in PSNR/bpp coordinates when compared with three common video codecs: H.264, H.265, and VP9.