{"title":"LZ77-Like Compression with Fast Random Access","authors":"Sebastian Kreft, G. Navarro","doi":"10.1109/DCC.2010.29","DOIUrl":"https://doi.org/10.1109/DCC.2010.29","url":null,"abstract":"We introduce an alternative Lempel-Ziv text parsing, LZ-End, that converges to the entropy and in practice gets very close to LZ77. LZ-End forces sources to finish at the end of a previous phrase. Most Lempel-Ziv parsings can decompress the text only from the beginning. LZ-End is the only parsing we know of able of decompressing arbitrary phrases in optimal time, while staying closely competitive with LZ77, especially on highly repetitive collections, where LZ77 excells. Thus LZ-End is ideal as a compression format for highly repetitive sequence databases, where access to individual sequences is required, and it also opens the door to compressed indexing schemes for such collections.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126290219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Markovian Predictive Compression: An Algorithm for Online Lossless Data Compression","authors":"Erez Shermer, M. Avigal, Dana Shapira","doi":"10.1109/DCC.2010.26","DOIUrl":"https://doi.org/10.1109/DCC.2010.26","url":null,"abstract":"This work proposes a novel practical and general-purpose lossless compression algorithm named Neural Markovian Predictive Compression (NMPC), based on a novel combination of Bayesian Neural Networks (BNNs) and Hidden Markov Models (HMM). The result is an interesting combination of properties: Linear processing time, constant memory storage performance and great adaptability to parallelism. Though not limited for such uses, when used for online compression (compressing streaming inputs without the latency of collecting blocks) it often produces superior results compared to other algorithms for this purpose. It is also a natural algorithm to be implemented on parallel platforms such as FPGA chips.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126725487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"File-Size Preserving LZ Encoding for Reversible Data Embedding","authors":"H. Yokoo","doi":"10.1109/DCC.2010.78","DOIUrl":"https://doi.org/10.1109/DCC.2010.78","url":null,"abstract":"Methods for recycling the redundancy due to reference multiplicity in the LZ77 algorithm have already been proposed. These methods can be characterized in terms of reversible data embedding. In these methods, the redundancy in LZ77 is used to embed extra information in codewords. This paper proposes an LZ77 variation that specializes in reversible data embedding. The proposed encoding algorithm performs neither compression nor expansion. Instead, it embeds maximum possible extra information without changing the input file size. The asymptotic embedding capacity of this algorithm is evaluated, and it is shown that a duality relation exists between compressibility and embedding capacity.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"224 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127297587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subsampling-Adaptive Directional Wavelet Transform for Image Coding","authors":"Jizheng Xu, Feng Wu","doi":"10.1109/DCC.2010.15","DOIUrl":"https://doi.org/10.1109/DCC.2010.15","url":null,"abstract":"In lifting-based directional wavelet transforms, different subsampling patterns may show significant difference for directional signals in image coding. This paper investigates the influence of subsampling in directional wavelet transform. We show that the best subsampling depends on the direction and the directionality strength of the signal. To improve the coding performance, we further propose a subsampling-adaptive directional wavelet transform, which can use different subsampling patterns adaptively and according to the local characteristics of the image. To handle the boundary transition when subsampling changes, a phase completion process is applied to ensure that wavelet transform with various subsampling can be performed without introducing boundary effects and performance loss. Experimental results show that the proposed transform can achieve significant coding gain in image coding compared to other existing directional wavelet transforms.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132186489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Matrix Completion Approach to Reduce Energy Consumption in Wireless Sensor Networks","authors":"A. Majumdar, R. Ward","doi":"10.1109/DCC.2010.66","DOIUrl":"https://doi.org/10.1109/DCC.2010.66","url":null,"abstract":"The main challenge faced by wireless sensor networks today is the problem of power consumption at the sensor nodes. Over time, researchers have developed different strategies to address this issue. Such strategies are strongly model dependent and/or application specific. In this work, we take a fresh look at the problem of power consumption in wireless sensor networks from a signal processing perspective. The main idea is simple. Sample only a subset of all the sensor nodes at a given instant and transmit them (this reduces both sampling and communication cost for all the nodes combined). At the central unit (sink) use smart mathematical tools (matrix completion algorithms) to estimate the data for the entire network. We have showed that, if about 1% reconstruction error is allowed, only 20% of the sensors need to sample and transmit at a given instant. This means on an average the life of the network is increased 5-fold. If more error reconstruction error is allowed, even lesser number of sensors need to be active at a given instant leading to more prolonged life of the network.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133406340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Approach to Time-Frequency Analysis","authors":"Xiteng Liu","doi":"10.1109/DCC.2010.79","DOIUrl":"https://doi.org/10.1109/DCC.2010.79","url":null,"abstract":"Time-frequency analysis is the fundamental methodology in signal processing. Conventionally, in applications of time-frequency analysis, the time domain of a signal is partitioned into intervals at first. Interval by interval, one computes the local frequency spectrum and then makes signal processing with respect to this local spectrum. The whole procedure goes on while time interval changes. This classic approach neglects the dependency between local frequency spectra. In this paper, we change the way and advocate a new approach by which we make signal processing within frequency bands rather than time intervals. New approach provides a good platform for exploiting the dependency between local frequency spectra. Furthermore, we extend technical design to theoretical constructions. Moreover, we apply new approach to data compression. Experiment results show that it may significantly improve technical performance without increasing computation load.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"23 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131096458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling Parallel Texts for Boosting Compression","authors":"J. Adiego, Miguel A. Martínez-Prieto, Javier E. Hoyos-Torío, F. Sánchez-Martínez","doi":"10.1109/DCC.2010.86","DOIUrl":"https://doi.org/10.1109/DCC.2010.86","url":null,"abstract":"Bilingual parallel corpora, also know as bitexts, convey the same information in two different languages. This implies that when modelling bitexts one can take advantage of the fact that there exists a relation between both texts; the text alignment task allow to establish such relationship. In this paper we propose different approaches that use words and biwords (pairs made of two words, each one from a different text) as representation symbolic units. The properties of these approaches are analyzed from a statistical point of view and tested as a preprocessing step to general purpose compressors. The results obtained suggest interesting conclusions concerning the use of both words and biwords. When encoded models are used as compression boosters we achieve compression ratios improving state-of-the-art compressors up to 6.5 percentage points, being up to 40% faster.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132271928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-resolution Mean-Shift Algorithm for Vector Quantization","authors":"P. Bouttefroy, A. Bouzerdoum, Azeddine Beghdadi, S. L. Phung","doi":"10.1109/DCC.2010.55","DOIUrl":"https://doi.org/10.1109/DCC.2010.55","url":null,"abstract":"This paper presents a new multi-resolution mean-shift algorithm for vector quantization of high-resolution images and generation of a stratified codebook. The algorithm employs the discrete wavelet transform (DWT) to circumvent the problem of bandwidth selection with the mean-shift algorithm. Here, the mean-shift algorithm is applied to a reduced set of samples in the color space. The detection of salient edges is performed on the DWT subbands for each level of decomposition to identify the pixels escaping the basin of attraction. The quantized image is reconstructed by upward interpolation of the salient pixels in the feature space. We also propose a Modified-Weighted mean-shift algorithm to speed up the image reconstruction stage. Experiments show that the proposed multi-resolution mean-shift provides significant speed-up compared to the Linde-Buzo-Gray (LBG) algorithm.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116172276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Media Transmission Scheme for Wireless VoIP","authors":"A. Khalifeh, H. Yousefi’zadeh","doi":"10.1109/DCC.2010.48","DOIUrl":"https://doi.org/10.1109/DCC.2010.48","url":null,"abstract":"In this paper, we propose an optimization framework for real-time voice transmission overwireless tandem channels prone to both bit errors and packet erasures. Utilizing a hybrid mediadependent and media independent error correction scheme, our proposed framework is capable ofprotecting voice packets against both types of errors. For each group of frames associated with onespeech spurt, the framework finds the optimal parity assignment of each voice frame according toits perceptual importance such that the quality of the received group of frames is maximized. Ourperformance evaluation results show that the proposed scheme outperforms a number of alternativeschemes and has a low computational complexity.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126043251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data Compression Technology Dedicated to Distribution and Embedded Systems","authors":"J. Odagiri, Noriko Itani, Y. Nakano, D. Culler","doi":"10.1109/DCC.2010.73","DOIUrl":"https://doi.org/10.1109/DCC.2010.73","url":null,"abstract":"In distribution and embedded systems, data compression is often used to reduce the size of flash RAM and transmission data, while a rapid decompression speed enables faster rebooting of the compressed program code. We have developed a new data compression algorithm with a high decompression speed and a good compression rate that is equivalent to zlib, the standard technology in use today. We created a LZSS-based algorithm by optimizing the parsing of data strings. LZSS is known as a high decompression speed algorithm useful for embedded systems, and optimal parsing is well known as a method for improving compression rates [1]. Previously, this combination had not been implemented because statistical code length varies during optimal parsing [1]. Our algorithm overcomes this problem by calculating the probability of the literal or the code ( distance and length ) solving the shortest path problem first. It then constructs a simple code set that enables fast decompression using those probabilities and solves the shortest path problem again. Experiments on the standard evaluation data and wireless sensor network program [2] demonstrated that we can achieve a high compression rate equivalent to zlib and a decompression speed that is twice as fast.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128283161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}