{"title":"Parallel algorithms for optimal compression using dictionaries with the prefix property","authors":"S. Agostino, J. Storer","doi":"10.1109/DCC.1992.227476","DOIUrl":"https://doi.org/10.1109/DCC.1992.227476","url":null,"abstract":"The authors study parallel algorithms for lossless data compression via textual substitution. Dynamic dictionary compression is known to be P-complete, however, if the dictionary is given in advance, they show that compression can be efficiently parallelized and a computational advantage is obtained when the dictionary has the prefix property. The approach can be generalized to the sliding window method where the dictionary is a window that passes continuously from left to right over the input string.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"173 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116645941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real time implementation of pruned tree search vector quantization","authors":"A. Madisetti, R. Jain, R. Baker","doi":"10.1109/DCC.1992.227466","DOIUrl":"https://doi.org/10.1109/DCC.1992.227466","url":null,"abstract":"Discusses the design of a CMOS integrated circuit for real time vector quantization (VQ) of images at MPEG rates. It has been designed as a slave processor which can implement binary, non-binary, and pruned tree search VQ algorithms. Inputs include the image source vectors, the VQ codevectors and external control signals that direct the search. The chip outputs the index of the codevector that best approximates the input in a mean square error sense. The layout has been generated using a 1.2 mu CMOS library and measures 5.76*6.6 mm/sup 2/. Critical path simulation with SPICE indicates a maximum clock rate of 40 MHz.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122390411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A forward-mapping realization of the inverse discrete cosine transform","authors":"L. McMillan, L. Westover","doi":"10.1109/DCC.1992.227459","DOIUrl":"https://doi.org/10.1109/DCC.1992.227459","url":null,"abstract":"The paper presents a new realization of the inverse discrete cosine transform (IDCT). It exploits both the decorrelation properties of the discrete cosine transform (DCT) and the quantization process that is frequently applied to the DCT's resultant coefficients. This formulation has several advantages over previous approaches, including the elimination of multiplies from the central loop of the algorithm and its adaptability to incremental evaluation. The technique provides a significant reduction in computational requirements of the IDCT, enabling a software-based implementation to perform at rates which were previously achievable only through dedicated hardware.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124423504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model based concordance compression","authors":"A. Bookstein, S. T. Klein, T. Raita","doi":"10.1109/DCC.1992.227473","DOIUrl":"https://doi.org/10.1109/DCC.1992.227473","url":null,"abstract":"The authors discuss concordance compression using the framework now customary in compression theory. They begin by creating a mathematical model of concordance generation, and then use optimal compression engines, such as Huffman or arithmetic coding, to do the actual compression. It should be noted that in the context of a static information retrieval system, compression and decompression are not symmetrical tasks. Compression is done only once, while building the system, whereas decompression is needed during the processing of every query and directly affects the response time. One may thus use extensive and costly preprocessing for compression, provided reasonably fast decompression methods are possible. Moreover, compression is applied to the full files (text, concordance, etc.), but decompression is needed only for (possibly many) short pieces, which may be accessed at random by means of pointers to their exact locations. Therefore the use of adaptive methods based on tables that systematically change from the beginning to the end of the file is ruled out. However, their concern is less the speed of encoding or decoding than relating concordance compression conceptually to the modern approach of data compression, and testing the effectiveness of their models.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126587116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The use of fractal theory in a video compression system","authors":"Maaruf Ali, C. Papadopoulos, T. Clarkson","doi":"10.1109/DCC.1992.227455","DOIUrl":"https://doi.org/10.1109/DCC.1992.227455","url":null,"abstract":"The paper describes how fractal coding theory may be applied to compress video images using an image resampling sequencer (IRS) in a video compression system on a modular image processing system. It describes the background theory of image (image) coding using a form of fractal equation known as iterated function system (IFS) codes. The second part deals with the modular image processing system on which to implement these operations. It briefly covers how IFS codes may be calculated. It is shown how the IRS and 2/sup nd/ order geometric transformations may be used to describe inter-frame changes to compress motion video.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126322606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical techniques for image compression","authors":"J. Reif, A. Yoshida","doi":"10.1109/DCC.1992.227478","DOIUrl":"https://doi.org/10.1109/DCC.1992.227478","url":null,"abstract":"Optical computing has recently become a very active research field. The advantage of optics is its capability of providing highly parallel operations in a three dimensional space. The authors propose optical architectures to execute various image compression techniques. They optically implement the following compression techniques: transform coding; vector quantization; and interframe coding; They show many generally used transform coding methods, for example, the cosine transform, can be implemented by a simple optical system. The transform coding can be carried out in constant time. Most of this paper is concerned with a sophisticated optical system for vector quantization using holographic associative matching. Holographic associative matching provided by multiple exposure holograms can offer advantageous techniques for vector quantization based compression schemes. Photorefractive crystals, which provide high density recording in real time, are used as the holographic media. The reconstruction alphabet can be dynamically constructed through training or stored in the photorefractive crystal in advance. Encoding a new vector can be carried out by holographic associative matching in constant time. An extension to interframe coding is also discussed.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124052482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image reconstruction for hybrid video coding systems","authors":"Qin-Fan Zhu, Yao Wang, Leonard Shaw","doi":"10.1109/DCC.1992.227458","DOIUrl":"https://doi.org/10.1109/DCC.1992.227458","url":null,"abstract":"Presents a new technique for image reconstruction from partially received information for hybrid video coding systems using DCT and motion compensated prediction and interpolation. The technique makes use of the smoothness property of typical video signals by requiring the reconstructed samples be smoothly connected with their adjacent samples, both spatially and temporally. This is fulfilled by minimizing the differences between neighboring pixels in the current as well as adjacent frames. The optimal solution is obtained through three linear transformations. This approach can yield more satisfactory results than the existing algorithms, especially for images with large motions or scene changes.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128777541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the JPEG model for lossless image compression","authors":"G. Langdon, A. Gulati, E. Seiler","doi":"10.1109/DCC.1992.227464","DOIUrl":"https://doi.org/10.1109/DCC.1992.227464","url":null,"abstract":"The JPEG lossless arithmetic coding algorithm and a predecessor algorithm called Sunset both employ adaptive arithmetic coding with the context model and parameter reduction approach of Todd et al. The authors compare the Sunset and JPEG context models for the lossless compression of gray-scale images, and derive new algorithms based on the strengths of each. The context model and binarization tree variations are compared in terms of their speed (the number of binary encodings required per test image) and their compression gain. In this study, the Bostelmann (1974) technique is studied for use at all resolutions, whereas in the arithmetic coded JPEG lossless, the technique is applied only at the 16-bit per pixel resolution.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"52 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132640667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Arithmetic coding for memoryless cost channels","authors":"S. Savari, R. Gallager","doi":"10.1109/DCC.1992.227472","DOIUrl":"https://doi.org/10.1109/DCC.1992.227472","url":null,"abstract":"The authors analyze the expected delay for infinite precision arithmetic codes and suggest a practical implementation that concentrates on the issue of delay.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"30 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131787315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Textual image compression","authors":"I. Witten, T. Bell, M. Harrison, Mark L. James, Alistair Moffat","doi":"10.1109/DCC.1992.227477","DOIUrl":"https://doi.org/10.1109/DCC.1992.227477","url":null,"abstract":"The authors describe a method for lossless compression of images that contain predominantly typed or typeset text-they call these textual images. An increasingly popular application is document archiving, where documents are scanned by a computer and stored electronically for later retrieval. Their project was motivated by such an application: Trinity College in Dublin, Ireland, are archiving their 1872 printed library catalogues onto disk, and in order to preserve the exact form of the original document, pages are being stored as scanned images rather than being converted to text. The test images are taken from this catalogue. These typeset documents have a rather old-fashioned look, and contain a wide variety of symbols from several different typefaces-the five test images used contain text in English, Flemish, Latin and Greek, and include italics and small capitals as well as roman letters. The catalogue also contains Hebrew, Syriac, and Russian text.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123008704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}