{"title":"New Families and New Members of Integer Sequence Based Coding Methods","authors":"Daniel Lowell, D. Tamir","doi":"10.1109/DCC.2009.87","DOIUrl":"https://doi.org/10.1109/DCC.2009.87","url":null,"abstract":"This paper presents Integer sequences that have the property of being additively and/or multiplicatively complete, Zekendorf, and unique Zekendorf. In addition, a generalized Elias coding scheme is developed. Features of Zekendorf sequence based and generalized Elias coding compression methods including compression rate, universality, asymptotic optimality, and coding complexity are analyzed.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115601447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear Suffix Array Construction by Almost Pure Induced-Sorting","authors":"Ge Nong, Sen Zhang, W. H. Chan","doi":"10.1109/DCC.2009.42","DOIUrl":"https://doi.org/10.1109/DCC.2009.42","url":null,"abstract":"We present a linear time and space suffix array (SA) construction algorithm called the SA-IS algorithm.The SA-IS algorithm is novel because of the LMS-substrings used for the problem reduction and the pure induced-sorting (specially coined for this algorithm)used to propagate the order of suffixes as well as that of LMS-substrings, which makes the algorithm almost purely relying on induced sorting at both its crucial steps.The pure induced-sorting renders the algorithm an elegant design and in turn a surprisingly compact implementation which consists of less than 100 lines of C code.The experimental results demonstrate that this newly proposed algorithm yields noticeably better time and space efficiencies than all the currently published linear time algorithms for SA construction.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126743006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-Guided Adaptive Recovery of Compressive Sensing","authors":"Xiaolin Wu, Xiangjun Zhang, Jia Wang","doi":"10.1109/DCC.2009.69","DOIUrl":"https://doi.org/10.1109/DCC.2009.69","url":null,"abstract":"For the new signal acquisition methodology of compressive sensing (CS) a challenge is to find a space in which the signal is sparse and hence recoverable faithfully. Given the nonstationarity of many natural signals such as images, the sparse space is varying in time or spatial domain. As such, CS recovery should be conducted in locally adaptive, signal-dependent spaces to counter the fact that the CS measurements are global and irrespective of signal structures. On the contrary existing CS reconstruction methods use a fixed set of bases (e.g., wavelets, DCT, and gradient spaces) for the entirety of a signal. To rectify this problem we propose a new model-based framework to facilitate the use of adaptive bases in CS recovery. In a case study we integrate a piecewise stationary autoregressive model into the recovery process for CS-coded images, and are able to increase the reconstruction quality by $2 thicksim 7$dB over existing methods. The new CS recovery framework can readily incorporate prior knowledge to boost reconstruction quality.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121993243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling the Correlation Noise in Spatial Domain Distributed Video Coding","authors":"N. Deligiannis, A. Munteanu, T. Clerckx, P. Schelkens, J. Cornelis","doi":"10.1109/DCC.2009.37","DOIUrl":"https://doi.org/10.1109/DCC.2009.37","url":null,"abstract":"Conventional models in distributed video coding (DVC) consider the correlation noise to be distributed independently from the realization of the side-information. This paper introduces a novel model, of which the standard deviation depends spatially on the realization of the side-information. The performance penalty in video coding caused by side-information-independency assumptions is theoretical quantified and experimentally confirmed. Furthermore, inspired by the spatial side-information-dependency of the proposed model, a novel approach for estimating the correlation channel from the partial knowledge of it is introduced. The proposed technique is incorporated into a spatial-domain unidirectional DVC system, providing state-of-the-art performance.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128516508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Data Reduction via KDE Approximation","authors":"D. Freedman, P. Kisilev","doi":"10.1109/DCC.2009.47","DOIUrl":"https://doi.org/10.1109/DCC.2009.47","url":null,"abstract":"Many of today’s real world applications need to handle and analyze continually growing amounts of data, while the cost of collecting data decreases. As a result, the main technological hurdle is that the data is acquired faster than it can be processed. Data reduction methods are thus increasingly important, as they allow one to extract the most relevant and important information from giant data sets. We present one such method, based on compressing the description length of an estimate of the probability distribution of a set points.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130377305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lossless Image Compression by PPM-Based Prediction Coding","authors":"M. Kitakami, Kensuke Tai","doi":"10.1109/DCC.2009.34","DOIUrl":"https://doi.org/10.1109/DCC.2009.34","url":null,"abstract":"Most of speech and image data compressed by lossy compression whose decompressed data are different from the original ones. Here, the different between the decompressed data and the original ones cannot be recognized by most of people. Lossless image compression, which gives exactly the same decompressed data as the original ones, is necessary for medical image, art work image, and satellite image, which are frequently processed by computers now. This paper proposes lossless image compression by prediction coding whose frequency table operation is based on PPM(Prediction by Partial Match). The prediction algorithm for the proposed method is based on that for CALIC, an existing lossless image compression method; and the difference between the predicted value and the actual one is encoded by PPM-based compression method. In this compression method, initial values in the frequency table and frequency table operation method are modified to achieve efficient compression ratio. Computer simulation says that the compression ratio of the proposed method is better than that of CALIC by about 0.07 bit/pixel.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126538319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Inverse Wavelet Transform by Compressive Sensing Decoding with Deconvolution","authors":"Dong Liu, Xiaoyan Sun, Feng Wu","doi":"10.1109/DCC.2009.19","DOIUrl":"https://doi.org/10.1109/DCC.2009.19","url":null,"abstract":"By virtue of compressive sensing (CS) that can recover sparse signals from a few linear and non-adaptive measurements, we propose an alternative decoding method for inverse wavelet transform when only partial coefficients are available. Classic CS decoding such as $l_1$-minimization indeed provides better reconstruction of sparse signals than inverse wavelet transform. Since many natural images are not sparse, we propose to further improve CS decoding from the Bayesian point of view. Specifically, as wavelet transform can be described as convolution, we present an iterative deconvolution method for CS decoding in the case of partial wavelet coefficients. Experimental results demonstrate the efficiency of our method. We conclude that such findings indicate promising applications in compression.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132583964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performing Vector Quantization Using Reduced Data Representation","authors":"Erickson Miranda, Guoqiang Shan, V. Megalooikonomou","doi":"10.1109/DCC.2009.74","DOIUrl":"https://doi.org/10.1109/DCC.2009.74","url":null,"abstract":"We propose a method to improve the performance of vector quantization by using different resolutions of the dataset for each GLA iteration. We discuss the use of wavelet decomposition, principal components analysis and other data and dimensionality reduction techniques on the dataset at different stages of vector quantization. Experimental results on both real and simulated datasets show that the proposed technique outperforms ordinary vector quantization in terms of mean squared error or running time.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129627333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lossy to Lossless Spatially Scalable Depth Map Coding with Cellular Automata","authors":"L. Cappellari, Carlos Cruz-Reyes, G. Calvagno, J. Kari","doi":"10.1109/DCC.2009.41","DOIUrl":"https://doi.org/10.1109/DCC.2009.41","url":null,"abstract":"Spatially scalable image coding algorithms are mostly based on linear filtering techniques that give a multi-resolution representation of the data. Reversible cellular automata can be instead used as simpler, non-linear filter banks that give similar performance. In this paper, we investigate the use of reversible cellular automata for lossy to lossless and spatially scalable coding of smooth multi-level images, such as depth maps. In a few cases, the compression performance of the proposed coding method is comparable to that of the JBIG standard, but, under most test conditions, we show better compression performances than those obtained with the JBIG or the JPEG2000 standards. The results stimulate further investigation into cellular automata-based methods for multi-level image compression.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130428355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Highly Accurate Distortion Estimation for JPEG2000 through PDF-Based Estimators","authors":"Francesc Aulí Llinàs, M. Marcellin, J. Serra-Sagristà","doi":"10.1109/DCC.2009.20","DOIUrl":"https://doi.org/10.1109/DCC.2009.20","url":null,"abstract":"Distortion estimation techniques are often employed in bitplane coding engines to minimize the computational load, or the memory requirements, of the encoder. A common approach is to determine distortion estimators that approximate the mean squared error decreases when data are successively coded and transmitted. Such estimators usually assume that coefficients are uniformly distributed in the quantization interval. Even though this assumption simplifies estimation, it does not exactly correspond with the nature of the signal. This work introduces new distortion estimators determined through a precise approximation of the coefficient's distribution within the quantization intervals. Experimental results obtained when our estimators are used for the post-compression rate-distortion optimization process of JPEG2000 suggest that they are able to approximate distortion with very high accuracy.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126282185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}