{"title":"On the Efficiency of View Synthesis Prediction for 3D Video Coding","authors":"Yichen Zhang, Ngai-Man Cheung, Lu Yu","doi":"10.1109/DCC.2015.44","DOIUrl":"https://doi.org/10.1109/DCC.2015.44","url":null,"abstract":"We study the efficiency of view synthesis prediction (VSP). The proposed spectral domain analysis relates the power spectral density of the VSP error to the probability density function of the warping error. The analysis takes into account the warping error induced by (i) depth coding and (ii) disparity rounding at integer-pel, half-pel and quarter-pel warping accuracy. The interaction between VSP efficiency and the interpolation filter is also studied. We validate our proposed model with empirical data. Using the proposed model, we discuss the interaction between prediction efficiency, depth image distortion, warping accuracy and interpolation filter. The proposed model provides theoretical insights into VSP in 3D-HEVC and could be used to guide the optimization of VSP designs.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128404230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compressing Yahoo Mail","authors":"Aran Bergman, Eyal Zohar","doi":"10.1109/DCC.2015.15","DOIUrl":"https://doi.org/10.1109/DCC.2015.15","url":null,"abstract":"Yahoo mail servers have been receiving an enormous number of messages each day for the past 17 years. The vast majority of today's messages (about 90%) are machine-generated from a boilerplate with a small number of per-recipient changes. We show that popular Zlib compression to the gzip format fails to fully exploit the high similarity between these machine-generated messages. In this paper we analyze the data redundancy in Yahoo mail and present methods to reduce its space requirements while still using the standard Zlib library. Our results show that we can further reduce the compressed data size by a factor of almost 2.5 compared to traditional gzip compression.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122875665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Strategy of Microscopic Parallelism for Bitplane Image Coding","authors":"F. Aulí-Llinàs, P. Enfedaque, J. Moure, Ian Blanes, Victor Sanchez","doi":"10.1109/DCC.2015.19","DOIUrl":"https://doi.org/10.1109/DCC.2015.19","url":null,"abstract":"Recent years have seen the rise of a new type of processor strongly relying on the Single Instruction, Multiple Data (SIMD) architectural principle. The main idea behind SIMD computing is to apply a flow of instructions to multiple pieces of data in parallel and synchronously. This permits the execution of thousands of operations in parallel, achieving higher computational performance than traditional Multiple Instruction, Multiple Data (MIMD) architectures. The level of parallelism required in SIMD computing can only be achieved in image coding systems via microscopic parallel strategies that code multiple coefficients in parallel. Until now, the only way to achieve microscopic parallelism in bitplane coding engines was to execute multiple coding passes in parallel. Such a strategy does not suit SIMD computing well because each thread executes different instructions. This paper introduces the first bitplane coding engine devised for the fine-grained parallelism required in SIMD computing. Its main insight is to allow parallel coefficient processing within a coding pass. Experimental tests show coding performance similar to that of JPEG2000.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115455946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cuboid Coding of Depth Motion Vectors Using Binary Tree Based Decomposition","authors":"Shampa Shahriyar, M. Murshed, Mortuza Ali, M. Paul","doi":"10.1109/DCC.2015.43","DOIUrl":"https://doi.org/10.1109/DCC.2015.43","url":null,"abstract":"Motion vectors of depth maps in multiview and free-viewpoint videos exhibit a strong spatial as well as inter-component clustering tendency. This paper presents a novel motion vector coding technique that first compresses the multidimensional bitmaps of macroblock mode information and then encodes only the non-zero components of the motion vectors. The bitmaps are partitioned into disjoint cuboids using binary-tree-based decomposition so that the 0's and 1's are either highly polarized or further sub-partitioning is unlikely to achieve any compression. Each cuboid is entropy-coded as a unit using binary arithmetic coding. This technique can exploit the spatial and inter-component correlations efficiently without the restriction of scanning the bitmap in a specific linear order, as run-length coding requires. Since encoding the non-zero component values no longer requires denoting the zero value, further compression efficiency is achieved. Experimental results on standard multiview test video sequences comprehensively demonstrate the superiority of the proposed technique, which achieves an overall coding gain over the state-of-the-art in the range of 17% to 51%, and 31% on average.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124951425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid Image Compression by Using Vector Quantization (VQ) and Vector-Embedded Karhunen-Loève Transform (VEKLT)","authors":"Kiung Park","doi":"10.1109/DCC.2015.14","DOIUrl":"https://doi.org/10.1109/DCC.2015.14","url":null,"abstract":"In this paper, a new block-transform-based image compression scheme is proposed by combining vector quantization (VQ) and two transformations, the discrete cosine transform (DCT) and the vector-embedded Karhunen-Loève transform (VEKLT). First, 8×8 blocks from an input image are normalized and vector-quantized. Then, the difference between each original block and its vector-quantized block is transformed by VEKLT. In parallel, the original block is transformed by DCT. All blocks are classified into two categories (DCT and VEKLT) to minimize the arithmetic code length. After that, quadtree decomposition is performed on the binary index image, which indicates to which of the two categories each block belongs. Experimental results show that the proposed scheme outperforms JPEG in peak signal-to-noise ratio (PSNR) and visual quality on highly detailed images.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121882107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometric Compression of Orientation Signals for Fast Gesture Analysis","authors":"A. Sivakumar, Rushil Anirudh, P. Turaga","doi":"10.1109/DCC.2015.39","DOIUrl":"https://doi.org/10.1109/DCC.2015.39","url":null,"abstract":"This paper concerns itself with compression strategies for orientation signals, seen as signals evolving on the space of quaternions. The compression techniques extend classical signal approximation strategies used in data mining by explicitly taking into account the quotient-space properties of the quaternion space. The approximation techniques are applied to the case of human gesture recognition from cell-phone-based orientation sensors. Results indicate that the proposed approach yields high recognition accuracy with low storage requirements, with the geometric computations providing added robustness over classical vector-space computations.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122673780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2-D Index Map Coding for HEVC Screen Content Compression","authors":"Yiling Xu, Wei Huang, Wei Wang, Fanyi Duanmu, Zhan Ma","doi":"10.1109/DCC.2015.62","DOIUrl":"https://doi.org/10.1109/DCC.2015.62","url":null,"abstract":"This paper introduces 2-D index map coding for the palette mode in the screen content coding extension of the High-Efficiency Video Coding (HEVC SCC) standard to further improve compression performance. In contrast to the current 1-D search, which uses RUN to represent the length of a matched string, we use block width and height to describe an arbitrary rectangular shape. We also use the block vector displacement to signal the matched block distance efficiently. Enlarging the search range from the current coding tree unit (CTU) to a small neighboring CTU window (i.e., 3×5 CTUs) provides coding efficiency comparable to full-frame intra block copy. The local search window is more practical in real applications, considering the trade-off between coding efficiency and implementation cost.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129718754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resampling Process of the Scalable High Efficiency Video Coding","authors":"Jianle Chen, E. Alshina, Xiang Li, M. Karczewicz, A. Alshin","doi":"10.1109/DCC.2015.60","DOIUrl":"https://doi.org/10.1109/DCC.2015.60","url":null,"abstract":"SHVC is the scalable extension of the latest video coding standard, High Efficiency Video Coding (HEVC), and its spatial resampling process is an essential module for supporting spatial scalability. This paper describes the resampling process in detail, covering both texture and motion data resampling in SHVC, and, using experimental evidence, demonstrates its benefits in terms of coding efficiency.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131222040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Block-Based Compressive Sensing Coding of Natural Images by Local Structural Measurement Matrix","authors":"Xinwei Gao, Jian Zhang, Wenbin Che, Xiaopeng Fan, Debin Zhao","doi":"10.1109/DCC.2015.47","DOIUrl":"https://doi.org/10.1109/DCC.2015.47","url":null,"abstract":"The Gaussian random matrix (GRM) has been widely used to generate linear measurements in compressive sensing (CS) of natural images. In practice, however, GRM suffers from two problems. One is that GRM is non-sparse and complicated, leading to high computational complexity and difficulty in hardware implementation. The other is that, regardless of the signal characteristics, the measurements generated by GRM are random, which results in low compression coding efficiency. In this paper, we design a novel local structural measurement matrix (LSMM) for block-based CS coding of natural images by exploiting the local smoothness of images. The proposed LSMM has two main advantages. First, LSMM is a highly sparse matrix that can be easily implemented in hardware, and its reconstruction performance is superior to that of GRM even at low CS sampling sub-rates. Second, adjacent measurement elements generated by LSMM are highly correlated, which can be exploited to greatly improve coding efficiency. Furthermore, this paper presents a new LSMM-based framework for block-based CS coding of natural images, covering measurement generation, measurement coding and CS reconstruction. Experimental results show that the proposed framework greatly enhances CS coding performance compared with other state-of-the-art image CS coding schemes.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"278 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134486294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lossless Data Compression via Substring Enumeration for k-th Order Markov Sources with a Finite Alphabet","authors":"K. Iwata, M. Arimura","doi":"10.1109/DCC.2015.51","DOIUrl":"https://doi.org/10.1109/DCC.2015.51","url":null,"abstract":"Dubé and Beaudoin proposed a lossless data compression technique called compression via substring enumeration (CSE) for a binary source alphabet. Dubé and Yokoo proved that CSE has linear worst-case complexity in both time and space with respect to the length of the string to be encoded. They also specified appropriate predictors for the uniform and combinatorial prediction models of CSE, and proved that CSE is asymptotically optimal for stationary binary ergodic sources. Our previous study evaluated the worst-case maximum redundancy of a modified CSE for an arbitrary binary string from the class of k-th order Markov sources. In this study, we propose a generalization of CSE for k-th order Markov sources with a finite alphabet X, based on the work of Ota and Morita.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114083826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}