{"title":"Progressive Ziv-Lempel encoding of synthetic images","authors":"Derek Greene, M. Vishwanath, F. Yao, Tong Zhang","doi":"10.1109/DCC.1997.582099","DOIUrl":"https://doi.org/10.1109/DCC.1997.582099","url":null,"abstract":"Summary form only given. We describe an algorithm that gives a progression of compressed versions of a single image. Each stage of the progression is a lossy compression of the image, with the distortion decreasing in each stage, until the last image is losslessly compressed. Both compressor and decompressor make use of earlier stages to significantly improve the compression of later stages of the progression. Our algorithm uses vector quantization to improve the distortion at the beginning of the progression, and adapts Ziv and Lempel's algorithm to make it efficient for progressive encoding.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114470169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Encoding of intervals with conditional coding","authors":"U. Graf","doi":"10.1109/DCC.1997.582097","DOIUrl":"https://doi.org/10.1109/DCC.1997.582097","url":null,"abstract":"Summary form only given. With conditional coding a new technique is presented that encodes equally likely symbols of an input alphabet A (|A|=m) efficiently. The code consists of bitstrings with size n=[log/sub 2/(m)] and (n+1) and is a prefix code. The encoding needs only one comparison, one shift, and one addition per encoded symbol. Compared to the theoretical limit the method loses only at most 0.086071... bits per encoding and 0.057304... bits in average. Opposed to radix conversion (which achieves the theoretical limit) the algorithm works without multiplication and division and does not need a single-bit writing loop or bitstring arithmetic in the encoding step. Therefore it works a lot faster than radix conversion and can easily be implemented in hardware. The decoding step has the same properties. Encoding and decoding can be exchanged for better adaption to the code alphabet size.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116050276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A codebook generation algorithm for document image compression","authors":"Qin Zhang, J. Danskin, N. Young","doi":"10.1109/DCC.1997.582053","DOIUrl":"https://doi.org/10.1109/DCC.1997.582053","url":null,"abstract":"Pattern-matching based document compression systems rely on finding a small set of patterns that can be used to represent all of the ink in the document. Finding an optimal set of patterns is NP-hard; previous compression schemes have resorted to heuristics. We extend the cross-entropy approach, used previously for measuring pattern similarity, to this problem. Using this approach we reduce the problem to the fixed-cost k-median problem, for which we present a new algorithm with a good provable performance guarantee. We test our new algorithm in place of the previous heuristics (First Fit, with and without generalized Lloyd's (k-means) postprocessing steps). The new algorithm generates a better codebook, resulting in an overall improvement in compression performance of almost 17%.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121320020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fixed-rate quantizer using block-based entropy-constrained quantization and run-length coding","authors":"Dongchang Yu, M. Marcellin","doi":"10.1109/DCC.1997.582054","DOIUrl":"https://doi.org/10.1109/DCC.1997.582054","url":null,"abstract":"A fast and efficient quantization technique is described. It is fixed-length, robust to bit errors, and compatible with most current compression standards. It is based on entropy-constrained quantization and uses the well-known and efficient Viterbi algorithm to force the coded sequence to be fixed-rate. Run-length coding techniques are used to improve the performance at low encoding rates. Simulation results show that it can achieve performance comparable to that of Huffman coded entropy-constrained scalar quantization with computational complexity increasing only linearly in block length.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123896360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On adaptive strategies for an extended family of Golomb-type codes","authors":"G. Seroussi, M. Weinberger","doi":"10.1109/DCC.1997.581993","DOIUrl":"https://doi.org/10.1109/DCC.1997.581993","url":null,"abstract":"Off-centered, two-sided geometric distributions of the integers are often encountered in lossless image compression applications, as probabilistic models for prediction residuals. Based on a recent characterization of the family of optimal prefix codes for these distributions, which is an extension of the Golomb (1966) codes, we investigate adaptive strategies for their symbol-by-symbol prefix coding, as opposed to arithmetic coding. Our strategies allow for adaptive coding of prediction residuals at very low complexity. They provide a theoretical framework for the heuristic approximations frequently used when modifying the Golomb code, originally designed for one-sided geometric distributions of non-negative integers, so as to apply to the encoding of any integer.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127664618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditional entropy coding of VQ indexes for image compression","authors":"Xiaolin Wu, Jiang Wen, W. H. Wong","doi":"10.1109/DCC.1997.582058","DOIUrl":"https://doi.org/10.1109/DCC.1997.582058","url":null,"abstract":"Vector quantization (VQ) is a source coding methodology with provable rate-distortion optimality. However, despite more than two decades of intensive research, VQ theoretical promise is yet to be fully realized in image compression practice. Restricted by high VQ complexity in dimensions and due to high-order sample correlations in images, block sizes of practical VQ image coders are hardly large enough to achieve the rate-distortion optimality. Among the large number of VQ variants in the literature, a technique called address VQ (A-VQ) by Nasrabadi and Feng (1990) achieved the best rate-distortion performance so far to the best of our knowledge. The essence of A-VQ is to effectively increase VQ dimensions by a lossless coding of a group of 16-dimensional VQ codewords that are spatially adjacent. From a different perspective, we can consider a signal source that is coded by memoryless basic VQ to be just another signal source whose samples are the indices of the memoryless VQ codewords, and then induce the problem of lossless compression of the VQ-coded source. If the memoryless VQ is not rate-distortion optimal (often the case in practice), then there must exist hidden structures between the samples of VQ-coded source (VQ codewords). Therefore, an alternative way of approaching the rate-distortion optimality is to model and utilize these inter-codewords structures or correlations by context modeling and conditional entropy coding of VQ indexes.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130441020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual rate control algorithms for fax-based video compression","authors":"Yi-Jen Chin, T. Berger","doi":"10.1109/DCC.1997.582086","DOIUrl":"https://doi.org/10.1109/DCC.1997.582086","url":null,"abstract":"Summary form only given. Video samples usually are predicted from coded versions of nearby samples sent either earlier in the same frame or in the previous frame. Analysis of the human vision system (HVS) suggests that we may not need to correct values of residuals that do not exceed a perceptual threshold sometimes referred to in the literature of perception as the just-noticeable-distortion (JND). The ideal JND provides each pixel being coded with a threshold level below which discrepancies are perceptually distortion-free. Also of interest is the rate control analysis of noticeable, above threshold distortions that inevitably result at low bit rates. Because facsimile-based video compression (FBVC) processing is done in the spatio-temporal pixel domain, we can exploit the local psycho-perceptual properties of the HVS. Our proposed rate control algorithms are distinguished by being computationally economical, transform-free, devoid of block-based artifacts, and capable of easily providing a constant bit rate video stream.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125912440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive image transmission: an adaptive quadtree-pruning approach","authors":"C. Bajaj, Guozhong Zhuang","doi":"10.1109/DCC.1997.582075","DOIUrl":"https://doi.org/10.1109/DCC.1997.582075","url":null,"abstract":"Summary form only given. Progressive, adaptive and hierarchical modes are desirable image coding features. This paper presents a quadtree-pruning pyramid coding scheme satisfying all these objectives. Pyramid coding is an approach suitable for progressive image transmission, where the original image is divided into different levels that correspond to successive approximants of the original one. Starting from the original image, a sequence of reduced-size images is formed by averaging intensity values over 2/spl times/2-pixel blocks. This sequence, called the mean pyramid, ends with an image with only one pixel. Then another sequence of images, called the difference pyramid which can be further encoded via vector quantization, is formed by taking the difference of two consecutive images in the mean pyramid. Our quadtree-pruning approach uses only the mean pyramid. Experiments show that the quadtree-pruning pyramid method is quite efficient for lossy compression. Our approach can also be used for lossless compression by simply setting the threshold function to be zero.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126873208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Universal transform coding based on backward adaptation","authors":"Vivek K Goyal, Jun Zhuang, M. Vetterli","doi":"10.1109/DCC.1997.582046","DOIUrl":"https://doi.org/10.1109/DCC.1997.582046","url":null,"abstract":"The method for universal transform coding based on backward adaptation introduced by Goyal et al. (see IEEE Int. Conf. Image Proc., vol.II, p.365-8, 1996) is reviewed and further analyzed. This algorithm uses a linear transform which is periodically updated based on a local Karhunen-Loeve transform (KLT) estimate. The KLT estimate is derived purely from quantized data, so the decoder can track the encoder state without any side information. The effect of estimating only from quantized data is quantitatively analyzed. Two convergence results which hold in the absence of estimation noise are presented. The first applies for any vector dimension but does not preclude the necessity of a sequence of quantization step sizes that goes to zero. The second applies only in the two-dimensional case, but shows local convergence for a fixed, sufficiently small quantization step size. Refinements which reduce the storage and computational requirements of the algorithm are suggested.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128473848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video compression with weighted finite automata","authors":"J. Albert, S. Frank, U. Hafner, M. Unger","doi":"10.1109/DCC.1997.582071","DOIUrl":"https://doi.org/10.1109/DCC.1997.582071","url":null,"abstract":"Summary form only given. Weighted finite automata (WFA) exploit self-similarities within single images and also video streams to remove spatial and temporal redundancies. The WFA image codec combines techniques from fractal image compression and vector-quantization to achieve performance results for low bit-rates which can be put on a par with state-of-the-art codecs like embedded zerotree wavelet coding. Moreover, frame regeneration of WFA encoded video streams is faster than that of wavelet coded video streams due to the simple mathematical structure of WFA. Therefore, WFA were chosen as a starting point for a fractal-like video compression with hierarchical motion-compensation. Video streams are structured as proposed by the MPEG standards: the entire video is subdivided into several groups of pictures which are made up of one I-frame and a given number of predicted B- or P-frames. The macro block concept of the MPEG standard is replaced by a hierarchical and adaptive image partitioning. We integrated motion compensation with variable block sizes into the WFA coder to exploit the inter-frame redundancy. The general concept of the WFA compression was retained since it already provides a hierarchical subdivision of the image. The video stream is encoded frame by frame with an improved version of the WFA inference algorithm.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128506193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}