Point Cloud Coding: Adopting a Deep Learning-based Approach
André F. R. Guarda, Nuno M. M. Rodrigues, F. Pereira
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954537
Abstract: Point clouds have recently become an important visual representation format, especially for virtual and augmented reality applications, thus making point cloud coding a very hot research topic. Deep learning-based coding methods have recently emerged in the field of image coding with increasing success. These coding solutions take advantage of the ability of convolutional neural networks to extract adaptive features from the images to create a latent representation that can be efficiently coded. In this context, this paper extends the deep learning-based coding approach to point cloud coding using an autoencoder network design. Performance results are very promising, showing improvements over the Point Cloud Library codec often taken as a benchmark, thus suggesting a significant margin of evolution for this new point cloud coding paradigm.
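
The abstract only describes the autoencoder design at a high level. The sketch below is a generic 3D convolutional autoencoder operating on voxelized occupancy blocks, a common setup in deep learning-based point cloud geometry coding; the block size, channel counts and layer depths are illustrative assumptions, not the authors' network.

```python
# Sketch of a 3D convolutional autoencoder for voxelized point cloud blocks.
# Architecture details (channels, depth, block size) are assumptions for illustration.
import torch
import torch.nn as nn

class VoxelAutoencoder(nn.Module):
    def __init__(self, latent_channels: int = 8):
        super().__init__()
        # Analysis transform: occupancy block -> compact latent representation
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, latent_channels, kernel_size=3, stride=2, padding=1),
        )
        # Synthesis transform: latent representation -> occupancy probabilities
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_channels, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        latent = self.encoder(x)   # in a real codec the latent would be quantized and entropy-coded
        return self.decoder(latent)

# Usage: a batch of 64x64x64 binary occupancy blocks
blocks = (torch.rand(2, 1, 64, 64, 64) > 0.95).float()
recon = VoxelAutoencoder()(blocks)  # per-voxel occupancy probabilities in [0, 1]
```
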
Adaptive QP with Tile Partition and Padding to Remove Boundary Artifacts for 360 Video Coding
Yule Sun, Bin Wang, Lu Yu
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954549
Abstract: To adapt to existing high-efficiency video coding standards and technologies, 360 video is represented by different projection formats with one or multiple planes. However, continuous spherical content will have discontinuous boundaries on the representation planes, which results in obvious artifacts in the viewports rendered from the coded projection formats. To address this subjective issue, a boundary-based adaptive QP method, together with tile partition and padding, is proposed in this paper. Tile partition and padding can effectively reduce obvious seam artifacts, and boundary areas are coded with better quality by using a lower QP to further reduce visible artifacts. Experiments are conducted on the hybrid equi-angular cubemap projection (HEC). Experimental results show that the proposed method can significantly improve subjective quality by removing boundary artifacts in 360 video coding.
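
A minimal sketch of the boundary-based adaptive QP idea, under simplifying assumptions: tiles whose borders coincide with discontinuous face seams of a 3x2 cubemap-style packing receive a negative QP offset so that boundary areas are coded with better quality. The tile grid, seam positions and offset value are illustrative, not the paper's exact settings.

```python
# Assign a lower QP to tiles lying on assumed face-boundary seams (illustrative layout).
def tile_qp_offsets(tiles_x, tiles_y, base_qp, boundary_delta_qp=-3):
    """Return a (tiles_y x tiles_x) grid of QP values."""
    qp = [[base_qp] * tiles_x for _ in range(tiles_y)]
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            # Assumed discontinuities: vertical seams between packed faces and the
            # horizontal seam between the two face rows of a 3x2 packing.
            at_face_seam = (tx % (tiles_x // 3) == 0 and tx != 0) or (ty == tiles_y // 2)
            if at_face_seam:
                qp[ty][tx] = base_qp + boundary_delta_qp  # lower QP -> higher quality
    return qp

print(tile_qp_offsets(tiles_x=6, tiles_y=4, base_qp=32))
```
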
Post Sample Adaptive Offset for Video Coding
Wang-Q Lim, H. Schwarz, D. Marpe, T. Wiegand
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954544
Abstract: In-loop filtering is an important task in video coding, as it refines both the reconstructed signal for display and the pictures used for inter prediction. At the current stage of the Versatile Video Coding (VVC) standardization, there are three in-loop filtering processes: the deblocking filter (DBF), sample adaptive offset (SAO), and the adaptive loop filter (ALF). Among them, SAO is the simplest in-loop filtering process and is highly effective in removing coding artifacts. It modifies decoded samples by conditionally adding an offset value to each sample after the application of the DBF. For this, a classification is applied at each sample location, which yields a partition of the set of all sample locations, and an offset value is then added to all samples associated with each class. The performance of SAO therefore essentially relies on how its classification behaves. In this paper, we introduce a novel classification method for SAO and, based on it, derive an additional SAO filtering process which we call post sample adaptive offset (PSAO). Experimental results show the effectiveness of the proposed PSAO filtering process: on average, 0.42%, 0.31% and 0.33% additional coding gains are achieved on top of VTM-5.0 for the all intra (AI), random access (RA) and low delay with B pictures (LB) configurations, respectively.
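
To make the classify-then-offset principle concrete, here is a minimal sketch of the general SAO mechanism the paper builds on (not the proposed PSAO classifier): each sample is assigned to a class and a per-class offset, chosen by the encoder and signalled in the bitstream, is added to every sample of that class. A simple band classification on 8-bit samples is assumed for illustration.

```python
# Generic band-offset style SAO application (illustrative classification, not PSAO).
import numpy as np

def apply_sample_offsets(decoded, offsets, num_bands=32, bit_depth=8):
    """decoded: 2-D array of reconstructed samples; offsets: one value per band."""
    band = decoded >> (bit_depth - int(np.log2(num_bands)))   # class index per sample
    out = decoded.astype(np.int32) + np.asarray(offsets)[band]
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(decoded.dtype)

decoded = np.random.randint(0, 256, size=(8, 8), dtype=np.int32)
offsets = np.zeros(32, dtype=np.int32)
offsets[10:14] = [1, -1, 2, 0]   # offsets the encoder would signal in the bitstream
filtered = apply_sample_offsets(decoded, offsets)
```
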
GAN-based Image Compression Using Mutual Information Maximizing Regularization
Shinobu Kudo, Shota Orihashi, Ryuichi Tanida, A. Shimizu
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954548
Abstract: Recently, image compression systems based on convolutional neural networks, which use flexible nonlinear analysis and synthesis transformations, have been developed to improve the restoration accuracy of decoded images. Methods built on the generative adversarial network framework [1] have been reported as one way to improve subjective image quality [2][3]. They optimize the distribution of restored images to be close to that of natural images and thus suppress visual artifacts such as blurring, ringing, and blocking. However, since methods of this type are optimized to focus on whether the restored image looks subjectively natural, components that are not correlated with the original image are mixed into the coding features obtained from the encoder. Thus, even though the appearance looks natural, the restored image may be subjectively perceived as a different object from the original, or its impression may be changed. In this paper, we describe a method we have developed to maximize the mutual information between the coding features and the restored images. This method, which we call "regularization", makes it possible to develop image compression systems that suppress such appearance differences while preserving subjective naturalness.
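
The exact regularizer is not spelled out in the abstract; the sketch below shows one plausible mechanism in the spirit of InfoGAN-style mutual-information lower bounds (an assumption, not the authors' formulation): an auxiliary network Q tries to recover the coding features from the decoded image, and its recovery error is added to the generator objective as a regularization term.

```python
# Hypothetical mutual-information regularization term for a GAN-based compression loss.
import torch
import torch.nn.functional as F

def generator_loss(adv_loss, distortion, features, features_from_recon, lambda_mi=0.1):
    """adv_loss, distortion: scalar GAN and fidelity terms.
    features: latent code z from the encoder;
    features_from_recon: Q(decoder(z)), an auxiliary network's estimate of z."""
    mi_regularizer = F.mse_loss(features_from_recon, features)  # proxy for maximizing I(z; x_hat)
    return adv_loss + distortion + lambda_mi * mi_regularizer
```
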
A 3D Haar Wavelet Transform for Point Cloud Attribute Compression Based on Local Surface Analysis
Sujun Zhang, Wei Zhang, Fuzheng Yang, Junyan Huo
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954557
Abstract: The point cloud is a main representation of 3D scenes and is widely applied in many fields, including autonomous driving, heritage reconstruction, virtual reality and augmented reality. The data size of this type of media is massive, since it contains numerous points, each associated with a large amount of information including geometric coordinates, color, reflectance, and normals. It is thus of great significance to investigate the compression of point cloud data to support its applications. However, developing efficient point cloud compression methods is challenging, mainly due to the unstructured nature and nonuniform distribution of the data. In this paper, we propose a novel point cloud attribute compression algorithm based on the Haar Wavelet Transform (HWT). More specifically, the transform is performed taking into account the surface orientation of the point cloud. Experimental results demonstrate that the proposed method outperforms other state-of-the-art transforms.
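
For reference, a minimal sketch of one Haar wavelet transform level applied to attribute values of paired points; the paper additionally steers the pairing and weights using local surface analysis, which is not reproduced here. Orthonormal Haar weights are used.

```python
# One level of a Haar wavelet transform on point attribute values (e.g. luma of colors).
import numpy as np

def haar_level(attributes):
    """attributes: 1-D array with an even number of values, ordered so that
    consecutive entries belong to neighbouring points."""
    a, b = attributes[0::2], attributes[1::2]
    low = (a + b) / np.sqrt(2.0)    # low-pass: carried to the next level
    high = (a - b) / np.sqrt(2.0)   # high-pass: coefficients to be quantized and coded
    return low, high

def inverse_haar_level(low, high):
    out = np.empty(low.size * 2)
    out[0::2] = (low + high) / np.sqrt(2.0)
    out[1::2] = (low - high) / np.sqrt(2.0)
    return out

colors = np.array([100., 102., 98., 97., 200., 201., 50., 53.])
low, high = haar_level(colors)
assert np.allclose(inverse_haar_level(low, high), colors)  # perfect reconstruction
```
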
Extending Video Decoding Energy Models for 360° and HDR Video Formats in HEVC
Matthias Kränzler, Christian Herglotz, A. Kaup
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954563
Abstract: Research has shown that decoder energy models are helpful tools for improving the energy efficiency of video playback applications. For example, an accurate feature-based bit stream model can reduce the energy consumption of the decoding process. However, until now only sequences in the SDR video format have been investigated. This paper therefore shows that the decoding energy of HEVC-coded bit streams can be estimated precisely for different video formats and coding bit depths. To this end, we compare a state-of-the-art model from the literature with a proposed model. We show that the decoding energy of bit streams in the 360°, HDR, and fisheye video formats can be estimated with a mean estimation error lower than 3.88% if the setups have the same coding bit depth. Furthermore, it is shown that, on average, the energy demand for decoding bit streams with a bit depth of 10 bit is 55% higher than with 8 bit.
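
A minimal sketch of a feature-based decoding energy model of the kind such work extends: the decoding energy is estimated as a weighted sum of bit-stream feature counts (e.g. numbers of coded blocks, intra/inter predictions, transform coefficients), with per-feature energies obtained from offline training. The feature names and energy values below are illustrative placeholders, not measured numbers from the paper.

```python
# Feature-based decoding energy estimate: E = sum_i n_i * e_i.
def estimate_decoding_energy(feature_counts, per_feature_energy):
    """feature_counts: {feature: occurrences in the bit stream};
    per_feature_energy: {feature: energy per occurrence in joules}."""
    return sum(n * per_feature_energy.get(f, 0.0) for f, n in feature_counts.items())

per_feature_energy = {"intra_pred": 2.1e-7, "inter_pred": 3.4e-7, "coeff": 0.6e-7}  # placeholders
feature_counts = {"intra_pred": 120_000, "inter_pred": 480_000, "coeff": 9_500_000}
print(f"estimated decoding energy: {estimate_decoding_energy(feature_counts, per_feature_energy):.3f} J")
```
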
Weighted Multi-Hypothesis Inter Prediction for Video Coding
Martin Winken, Christian Bartnik, H. Schwarz, D. Marpe, T. Wiegand
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954505
Abstract: A key component of state-of-the-art video coding is motion-compensated prediction, also called inter prediction. Current standards allow uni- and bi-prediction, i.e. a linear superposition of up to two motion-compensated prediction signals. It is well known that by superposing more than two prediction signals (or hypotheses), the energy of the prediction error can be further reduced. In this paper, it is shown that allowing the encoder to choose among different weights for the individual hypotheses is beneficial from a rate-distortion perspective. A practical multi-hypothesis inter prediction scheme based on the Versatile Video Coding Test Model (VTM) is presented. For VTM-1, in the Random Access configuration according to the JVET Common Test Conditions, the average luma BD bit rate is in the range of -1.6% to -1.9% for different settings using up to four prediction hypotheses. For VTM-2, the corresponding BD bit rate is -0.95%. For higher bit rates (i.e., QP values 12, 17, 22, 27) the BD bit rates are -2.2% for VTM-1 and -1.4% for VTM-2.
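
The core operation is a weighted linear combination of several motion-compensated hypotheses, sketched below. The weights here are illustrative; in the scheme described, the encoder chooses among different weights per block and signals its choice, which is not modelled here.

```python
# Weighted superposition of motion-compensated prediction hypotheses.
import numpy as np

def multi_hypothesis_prediction(hypotheses, weights):
    """hypotheses: list of equally sized motion-compensated prediction blocks."""
    weights = np.asarray(weights, dtype=np.float64)
    assert np.isclose(weights.sum(), 1.0), "weights are assumed normalized"
    return sum(w * h for w, h in zip(weights, np.asarray(hypotheses, dtype=np.float64)))

h1 = np.full((4, 4), 100.0)   # e.g. prediction from a list-0 reference
h2 = np.full((4, 4), 104.0)   # e.g. prediction from a list-1 reference
h3 = np.full((4, 4), 96.0)    # additional hypothesis
pred = multi_hypothesis_prediction([h1, h2, h3], [0.5, 0.25, 0.25])
```
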
Low Pixel Rate 3DoF+ Video Compression Via Unpredictable Region Cropping
Bin Wang, Yule Sun, Lu Yu
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954501
Abstract: An enhanced three degrees of freedom (3DoF+) video system enables both rotational and translational movements for viewers within a limited scene. It introduces interactive motion parallax, which provides viewers with a more natural and immersive visual experience on head-mounted displays (HMDs). A large set of views is required to support a 3DoF+ video system, and this huge amount of data costs a tremendous amount of computation and bandwidth. In this paper, a method based on unpredictable region cropping is proposed to reduce the data size. The proposed method consists of two steps: basic view selection and sub-image cropping. One or several views in the source view set are adaptively selected as basic views, whose role is to predict the remaining views; the regions that cannot be predicted are cropped into multiple rectangular sub-images. The basic views and the cropped sub-images, called the to-be-coded views, are coded using the High Efficiency Video Coding (HEVC) standard. Experimental results show that the proposed method can achieve up to 75.0% pixel-rate savings and improve the quality of rendered views by 1.5 dB. The basic view selection method was adopted by the Moving Picture Experts Group (MPEG) video subgroup as a tool for the first version of the Test Model of Immersive Video (TMIV).
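
A minimal sketch of the sub-image cropping step under simplifying assumptions: given a per-pixel mask of regions in a non-basic view that cannot be predicted from the basic views (e.g. disocclusions found by reprojection), the enclosing rectangle is cropped so that only this sub-image needs to be coded alongside the basic views. The paper crops multiple rectangles per view; a single bounding box is used here for illustration, and the mask itself is assumed given.

```python
# Crop the bounding rectangle of an unpredictable region from a non-basic view.
import numpy as np

def crop_unpredictable_region(view, unpredictable_mask):
    """view: HxWxC image of a non-basic view; unpredictable_mask: HxW boolean array."""
    ys, xs = np.nonzero(unpredictable_mask)
    if ys.size == 0:
        return None, None                      # fully predictable: nothing extra to code
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return view[y0:y1, x0:x1], (y0, x0)        # sub-image and its position for rendering

view = np.zeros((480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[100:160, 300:420] = True                  # assumed disoccluded area
sub_image, origin = crop_unpredictable_region(view, mask)
```
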
Content-gnostic Bitrate Ladder Prediction for Adaptive Video Streaming
Angeliki V. Katsenou, J. Solé, D. Bull
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954529
Abstract: A challenge that many video providers face is the heterogeneity of networks and display devices used for streaming, as well as a wide variety of content with different encoding performance. In the past, a fixed bit rate ladder based on a "fitting all" approach has been employed. Content-tailored (per-title) ladders address the resulting inefficiency, but such a solution is highly demanding: the computational and financial cost of constructing the convex hull per video by encoding at all resolutions and quantization levels is huge. In this paper, we propose a content-gnostic approach that exploits machine learning to predict the bit rate ranges of the different resolutions, which has the advantage of significantly reducing the number of encodes required. First results, based on over 100 HEVC-encoded sequences, demonstrate the potential of the approach, showing an average Bjøntegaard Delta Rate (BDRate) loss of 0.51% and an average BDPSNR loss of 0.01 dB compared to the ground truth, while significantly reducing the number of pre-encodes required compared to two other methods (by 81%-94%).
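
A minimal sketch of the content-gnostic idea under assumed feature names: content features extracted from the source sequence feed a regressor that predicts, per resolution, the bit-rate point at which that resolution enters the convex hull, so only a few targeted encodes are needed. The feature set, regressor choice and training data below are illustrative placeholders, not the paper's pipeline or results.

```python
# Predict a bitrate-ladder switch point from content features (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = rng.random((100, 3))                     # e.g. [spatial activity, temporal activity, colourfulness]
crossover_kbps = rng.random(100) * 4000 + 500       # assumed bit rate where 1080p overtakes 720p

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(features, crossover_kbps)
new_sequence_features = np.array([[0.4, 0.7, 0.2]])
print("predicted 720p->1080p switch bit rate:", model.predict(new_sequence_features)[0], "kbps")
```
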
Extended Transform Skip Mode and Fast Multiple Transform Set Selection in VVC
Tung Nguyen, B. Bross, Paul Keydel, H. Schwarz, D. Marpe, T. Wiegand
2019 Picture Coding Symposium (PCS), November 2019. DOI: 10.1109/PCS48520.2019.8954540
Abstract: From its predecessor High Efficiency Video Coding (HEVC), the Versatile Video Coding (VVC) development has adopted the possibility of bypassing the transform when the transform block size is 4×4. Extending this so-called Transform Skip Mode (TSM) to transform block sizes up to 32×32 increases the encoding time, while the compression efficiency improvement is obtained for screen content only. This paper presents the so-called Unified MTS scheme, which enables TSM for luma transform block sizes up to 32×32 without an excessive increase in encoding time by incorporating TSM into the existing Multiple Transform Set (MTS) technique. The Unified MTS scheme achieves compression efficiency improvements, in terms of BD-rate, of about -5.0% in the All-Intra configuration and -5.4% in the Random-Access configuration for the screen content sequences of the used test set. Compared to the straightforward extension of TSM to transform block sizes up to 32×32, the encoding time is about 25% lower in the All-Intra configuration and about 21% lower in the Random-Access configuration, whereas the compression efficiency improvements are only 0.04% and 0.09% smaller, respectively. Relative to the anchor using TSM for 4×4 transform blocks only, the encoding time is the same for natural content and 3% higher for screen content.
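
To illustrate the kind of decision the unified scheme makes, here is a minimal sketch (an assumption, not the VTM encoder logic): transform skip is evaluated alongside the MTS transform candidates in a single rate-distortion selection loop, and the winning candidate is signalled with one index. The candidate list is the typical VVC set; the cost function is a placeholder.

```python
# Select among transform candidates (including transform skip) by minimum RD cost.
def select_transform(block, candidates, rd_cost):
    """candidates: {index: transform name}; rd_cost: callable(block, name) -> float."""
    costs = {idx: rd_cost(block, name) for idx, name in candidates.items()}
    best_idx = min(costs, key=costs.get)
    return best_idx, candidates[best_idx]   # best_idx would be signalled in the bitstream

candidates = {0: "DCT2", 1: "DST7xDST7", 2: "DCT8xDST7", 3: "DST7xDCT8", 4: "DCT8xDCT8", 5: "TransformSkip"}
toy_cost = lambda block, name: float(len(name))     # placeholder RD cost for illustration only
print(select_transform(block=None, candidates=candidates, rd_cost=toy_cost))
```
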