{"title":"On the coding delay of a general coder","authors":"M. Weinberger, A. Lempel, J. Ziv","doi":"10.1109/DCC.1992.227471","DOIUrl":"https://doi.org/10.1109/DCC.1992.227471","url":null,"abstract":"The authors propose a general model for a sequential coder, and investigate the associated coding delay. This model is employed to derive lower and upper bounds on the delay associated with commonly used encoders and decoders for noiseless data compression.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126317610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multispectral KLT-wavelet data compression for Landsat thematic mapper images","authors":"B. R. Epstein, R. Hingorani, J. M. Shapiro, M. Czigler","doi":"10.1109/DCC.1992.227461","DOIUrl":"https://doi.org/10.1109/DCC.1992.227461","url":null,"abstract":"The authors report a methodology that enhances the compression of Landsat thematic mapper (TM) multispectral imagery, while reducing the image information loss. The method first removes interband correlation of the image data by use of the Karhunen-Loeve transform (KLT) to produce the image principal components. Each principal component is spatially decorrelated using a discrete wavelet transform. The resulting coefficients are then quantized and losslessly encoded. Image compressions of typically 80:1 demonstrate that the method should be quite suitable for rapid browsing applications where small amounts of image loss are tolerable.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134234205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compression of grey-scale fingerprint images","authors":"T. Hopper, F. Preston","doi":"10.1109/DCC.1992.227450","DOIUrl":"https://doi.org/10.1109/DCC.1992.227450","url":null,"abstract":"Investigates a number of techniques developed for fingerprint identification. The test implementations include: ISO/CCITT JPEG developed cosine transform; local cosine transform; best basis-adaptive wavelet transform plus uniform quantisation; wavelet vector quantisation; and wavelet scaler quantisation. All of the above algorithms are viewed from a multifrequency or subband decomposition perspective. The results of ad hoc tests are summarised.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132234808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On binary alphabetical codes","authors":"D. Sheinwald","doi":"10.1109/DCC.1992.227470","DOIUrl":"https://doi.org/10.1109/DCC.1992.227470","url":null,"abstract":"Binary alphabetical codes, which are prefix free, fixed-to-variable binary codes for discrete memoryless sources, in which the lexicographic order of the codewords agrees with the alphabet order of the respective source letters, are studied. A necessary and sufficient condition on the sequence of codeword length of any such code is proved. A new upper bounds on the redundancy of alphabetical codes relative to the optimal prefix free, fixed-to-variables codes-the Huffman codes-is proved. An adaptation of the Ziv-Lempel algorithm making it lexicographic order preserving, without any additional redundancy, is presented.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129828822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A split-merge parallel block-matching algorithm for video displacement estimation","authors":"B. Carpentieri, J. Storer","doi":"10.1109/DCC.1992.227457","DOIUrl":"https://doi.org/10.1109/DCC.1992.227457","url":null,"abstract":"Motion compensation is one of the most effective techniques used in interframe data compression. The authors present a parallel block-matching algorithm for estimating interframe displacement of small blocks with minimum error. The algorithm is designed for a grid architecture to process video in real time. The blocks may have variable size depending on a split-and-merge technique. The algorithm performs a segmentation of the image into regions (objects) moving in the same direction and uses this knowledge to improve the transmission of the displacement vectors.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130893429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Universal coding of band-limited sources by sampling and dithered quantization","authors":"R. Zamir, M. Feder","doi":"10.1109/DCC.1992.227448","DOIUrl":"https://doi.org/10.1109/DCC.1992.227448","url":null,"abstract":"The authors analyze a scheme for encoding continuous time band-limited signals in which the input is sampled at Nyquist's rate or faster, the samples undergo dithered uniform or lattice quantization and the quantizer output is entropy coded. This analysis leads to explicit expressions for the trade-off between sampling rate and quantization accuracy. Also, they provide expression for the scheme's redundancy (i.e. its excess rate over the rate distortion function) in terms of the both the sampling rate and quantization resolution parameters.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122724972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Error modeling for hierarchical lossless image compression","authors":"P. Howard, J. Vitter","doi":"10.1109/DCC.1992.227454","DOIUrl":"https://doi.org/10.1109/DCC.1992.227454","url":null,"abstract":"The authors present a new method for error modeling applicable to the multi-level progressive (MLP) algorithm for hierarchical lossless image compression. This method, based on a concept called the variability index, provides accurate models for pixel prediction errors without requiring explicit transmission of the models. They also use the variability index to show that prediction errors do not always follow the Laplace distribution, as is commonly assumed; replacing the Laplace distribution with a more general distribution further improves compression. They describe a new compression measurement called compression gain, and give experimental results showing that the using variability index gives significantly better compression than other methods in the literature.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123405347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complexity optimized vector quantization: a neural network approach","authors":"J. Buhmann, H. Kühnel","doi":"10.1109/DCC.1992.227480","DOIUrl":"https://doi.org/10.1109/DCC.1992.227480","url":null,"abstract":"The authors discuss a vector quantization strategy which jointly optimizes distortion errors and complexity costs. A maximum entropy estimation of the vector quantization cost function yields an optimal codebook size, the reference vectors and the assignment frequencies. They compare different complexity measures for the design of image compression algorithms which quantize wavelet decomposed images. An online version of complexity optimized vector quantization is implemented by an artificial neural network with winner-take-all connectivity. Their approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained vector quantization or self-organizing topological maps and competitive neural networks.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124487087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparison of codebook generation techniques for vector quantization","authors":"R. Sproull, I. Sutherland","doi":"10.1109/DCC.1992.227469","DOIUrl":"https://doi.org/10.1109/DCC.1992.227469","url":null,"abstract":"The paper examines tradeoffs between speed and quality of codebook/generation algorithms and offers new ways to produce excellent codebooks with only modest computation cost. It compares the performance of four algorithms for constructing codebooks. The LBG method of Linde, Buzo, and Gray (1980) produces the best codebooks but requires the most computation. The method by Equitz (1987, 1989) produces codebooks nearly as good and requires somewhat less computation. It describes a new method based on eigenvector subdivision that produces useable codebooks in a fraction of the computational effort of either of the other methods. A fourth hybrid method yields very good codebooks with modest computation by using the eigenvector subdivision method to obtain a first approximation that is refined with LBG optimization.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115949987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convolutional interpolative coding algorithms","authors":"M. Khansari, I. Widjaja, A. Leon-Garcia","doi":"10.1109/DCC.1992.227460","DOIUrl":"https://doi.org/10.1109/DCC.1992.227460","url":null,"abstract":"The authors proposed a method for decoding interpolatively encoded data. This class of coding scheme achieves higher estimation gain and are symmetric with respect to time which makes them a good candidate for storage application. They showed that different trade-off parameters are involved and investigated their relationships. These parameters are estimation gain, delay experienced by the encoder and the decoder and end-to-end signal-to-noise ratio. They also showed the implementation and the effect of incorporating quantizers in the circuits. Specifically, they investigated two extreme open and closed loop architectures and compared their performances. Generalization of the above algorithm to noise feedback coding can be achieved easily.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127479661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}