{"title":"Matching pursuit video coding at very low bit rates","authors":"Ralph Neff, A. Zakhor","doi":"10.1109/DCC.1995.515531","DOIUrl":"https://doi.org/10.1109/DCC.1995.515531","url":null,"abstract":"Matching pursuits refers to a greedy algorithm which matches structures in a signal to a large dictionary of functions. In this paper, we present a matching-pursuit based video coding system which codes motion residual images using a large dictionary of Gabor functions. One feature of our system is that bits are assigned progressively to the highest-energy areas in the motion residual image. The large dictionary size is another advantage, since it allows structures in the motion residual to be represented using few significant coefficients. Experimental results compare the performance of the matching-pursuit system to a hybrid-DCT system at various bit rates between 6 and 128 kbit/s. Additional experiments show how the matching pursuit system performs if the Gabor dictionary is replaced by an 8/spl times/8 DCT dictionary.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126964880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparison of the Z, E/sub 8/ and Leech lattices for image subband quantization","authors":"Zheng Gao, Feng Chen, B. Belzer, J. Villasenor","doi":"10.1109/DCC.1995.515521","DOIUrl":"https://doi.org/10.1109/DCC.1995.515521","url":null,"abstract":"Lattice vector quantization schemes offer high coding efficiency without the burden associated with generating and searching a codebook. The distortion associated with a given lattice is often expressed in terms of the G number, which is a measure of the mean square error per dimension generated by quantization of a uniform source. Subband image coefficients, however, are best modeled by a generalized Gaussian, leading to distortion characteristics that are quite different from those encountered for uniform, Laplacian, or Gaussian sources. We present here the distortion associated with Z, E/sub 8/, and Leech lattice quantization for coding of generalized Gaussian sources, and show that for low bit rates the Z lattice offers both the best performance and the lowest implementational complexity.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127463324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel algorithms for the static dictionary compression","authors":"H. Nagumo, Mi Lu, K. Watson","doi":"10.1109/DCC.1995.515506","DOIUrl":"https://doi.org/10.1109/DCC.1995.515506","url":null,"abstract":"Studies parallel algorithms for two static dictionary compression strategies. One is the optimal dictionary compression with dictionaries that have the prefix property, for which our algorithm requires O(L+log n) time and O(n) processors, where L is the maximum allowable length of the dictionary entries, while previous results run in O(L+log n) time using O(n/sup 2/) processors, or in O(L+log/sup 2/n) time using O(n) processors. The other algorithm is the longest-fragment-first (LFF) dictionary compression, for which our algorithm requires O(L+log n) time and O(nL) processors, while the previous result has O(L log n) time performance on O(n/log n) processors. We also show that the sequential LFF dictionary compression can be computed online with a lookahead of length O(L/sup 2/).","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128859471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effect of non-greedy parsing in Ziv-Lempel compression methods","authors":"R. Horspool","doi":"10.1109/DCC.1995.515520","DOIUrl":"https://doi.org/10.1109/DCC.1995.515520","url":null,"abstract":"Most practical compression methods in the LZ77 and LZ78 families parse their input using a greedy heuristic. However the popular gzip compression program demonstrates that modest but significant gains in compression performance are possible if non-greedy parsing is used. Practical implementations for using non-greedy parsing in LZ77 and LZ78 compression are explored and some experimental measurements are presented.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122321607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vector quantization and clustering: a pyramid approach","authors":"D. Tamir, Chi-Yeon Park, Wook-Sung Yoo","doi":"10.1109/DCC.1995.515592","DOIUrl":"https://doi.org/10.1109/DCC.1995.515592","url":null,"abstract":"A multi-resolution K-means clustering method is presented. Starting with a low resolution sample of the input data the K-means algorithm is applied to a sequence of monotonically increasing-resolution samples of the given data. The cluster centers obtained from a low resolution stage are used as initial cluster centers for the next stage which is a higher resolution stage. The idea behind this method is that a good estimation of the initial location of the cluster centers can be obtained through K-means clustering of a sample of the input data. K-means clustering of the entire data with the initial cluster centers estimated by clustering a sample of the input data, reduces the convergence time of the algorithm.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127530741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lossless region coding schemes","authors":"M. Turner","doi":"10.1109/DCC.1995.515591","DOIUrl":"https://doi.org/10.1109/DCC.1995.515591","url":null,"abstract":"Summary form only given. The use of describing regions as separate entities within an image has been applied within specific fields of image compression for many years. This study hopes to show that the technique, when applied with care, is practical for virtually all image types. Three different schemes for segmenting and coding an image have been considered: array covering; region numbering; and edge following.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132825522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time VLSI compression for high-speed wireless local area networks","authors":"Bongjin Jung, W. Burleson","doi":"10.1109/DCC.1995.515541","DOIUrl":"https://doi.org/10.1109/DCC.1995.515541","url":null,"abstract":"Summary form only presented; substantially as follows. Presents a new compact, power-efficient, and scalable VLSI array for the first Lempel-Ziv (LZ) algorithm to be used in high-speed wireless data communication systems. Lossless data compression can be used to inexpensively halve the amount of data to be transmitted, thus improving the effective bandwidth of the communication channel and in turn, the overall network performance. For wireless networks, the data rate and latency requirement are appropriate for a dedicated VLSI implementation of LZ compression. The nature of wireless networks requires that any additional VLSI hardware also be small, low-power and inexpensive. The architecture uses a novel custom systolic array and a simple dictionary FIFO which is implemented using conventional SRAM. The architecture consists of M simple processing elements where M is the maximum length of the string to be replaced with a codeword, which for practical LAN applications, can range from 16 to 32. The systolic cell has been optimized to remove any superfluous state information or logic, thus making it completely dedicated to the task of LZ compression. A prototype chip has been implemented using 2 /spl mu/s CMOS technology. Using M=32, and assuming a 2:1 compression ratio, the system can process approximately 90 Mbps with a 100 MHz clock rate.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132117354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new approach to scalable video coding","authors":"W. Chung, F. Kossentini, Mark J. T. Smith","doi":"10.1109/DCC.1995.515528","DOIUrl":"https://doi.org/10.1109/DCC.1995.515528","url":null,"abstract":"This paper introduces a new framework for video coding that facilitates operation over a wide range of transmission rates. The new method is a subband coding approach that employs motion compensation, and uses prediction-frame and intra-frame coding within the framework. It is unique in that it allows lossy coding of the motion vectors through its use of multistage residual vector quantization (RVQ). Furthermore, it selects the motion vector with the best rate-distortion tradeoff among a number of possible motion vector candidates, and provides a rate-distortion-based mechanism for alternating between intra-frame and inter-frame coding. The framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130998899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convergence of fractal encoded images","authors":"J. Kominek","doi":"10.1109/DCC.1995.515514","DOIUrl":"https://doi.org/10.1109/DCC.1995.515514","url":null,"abstract":"Fractal image compression, despite its great potential, suffers from some flaws that may prevent its adaptation from becoming more widespread. One such problem is the difficulty of guaranteeing convergence, let alone a specific error tolerance. To help surmount this problem, we have introduced the terms compound, cycle, and partial contractivity concepts indispensable for understanding convergence of fractal images. Most important, they connect the behavior of individual pixels to the image as a whole, and relate such behavior to the component affine transforms.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130970659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantization distortion in block transform-compressed data","authors":"A. Boden","doi":"10.1109/DCC.1995.515537","DOIUrl":"https://doi.org/10.1109/DCC.1995.515537","url":null,"abstract":"Summary form only given, as follows. The JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. Block transform compression schemes exhibit sharp discontinuities at data block boundaries: this phenomenon is a visible manifestation of the compression quantization distortion. For example, in compression algorithms such as JPEG these blocking effects manifest themselves visually as discontinuities between adjacent 8×8 pixel image blocks. In general the distortion characteristics of block transform-based compression techniques are understandable in terms of the properties of the transform basis functions and the transform coefficient quantization error. In particular, the blocking effects exhibited by JPEG are explained by two simple observations demonstrated in this work: a disproportionate fraction of the total quantization error accumulates on block edge pixels; and the quantization errors among pixels within a compression block are highly correlated, while the quantization errors between pixels in separate blocks are uncorrelated. A generic model of block transform compression quantization noise is introduced, applied to synthesized and real one and two dimensional data using the DCT as the transform basis, and results of the model are shown to predict distortion patterns observed in data compressed with JPEG.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"21 1-3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123873608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}