{"title":"Parallel algorithms for the static dictionary compression","authors":"H. Nagumo, Mi Lu, K. Watson","doi":"10.1109/DCC.1995.515506","DOIUrl":"https://doi.org/10.1109/DCC.1995.515506","url":null,"abstract":"Studies parallel algorithms for two static dictionary compression strategies. One is the optimal dictionary compression with dictionaries that have the prefix property, for which our algorithm requires O(L+log n) time and O(n) processors, where L is the maximum allowable length of the dictionary entries, while previous results run in O(L+log n) time using O(n^2) processors, or in O(L+log^2 n) time using O(n) processors. The other algorithm is the longest-fragment-first (LFF) dictionary compression, for which our algorithm requires O(L+log n) time and O(nL) processors, while the previous result has O(L log n) time performance on O(n/log n) processors. We also show that the sequential LFF dictionary compression can be computed online with a lookahead of length O(L^2).","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128859471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subband coding methods for seismic data compression","authors":"A. Kiely, F. Pollara","doi":"10.1109/DCC.1995.515557","DOIUrl":"https://doi.org/10.1109/DCC.1995.515557","url":null,"abstract":"Summary form only given. A typical seismic analysis scenario involves collection of data by an array of seismometers, transmission over a channel offering limited data rate, and storage of data for analysis. Seismic data analysis is performed for monitoring earthquakes and for planetary exploration, as in the planned study of seismic events on Mars. Seismic data compression systems are required to cope with the transmission of vast amounts of data over constrained channels and must be able to accurately reproduce occasional high energy seismic events. We propose a compression algorithm that includes three stages: a decorrelation stage based on subband coding, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient block-adaptive arithmetic coding method. Adaptivity to the non-stationary behavior of the waveform is achieved by partitioning the data into blocks which are encoded separately. The compression ratio of the proposed scheme can be set to meet prescribed fidelity requirements, i.e., the waveform can be reproduced with sufficient fidelity for accurate interpretation and analysis. The distortions incurred by this compression scheme are currently being evaluated by several seismologists. Encoding is done with high efficiency due to the low overhead required to specify the parameters of the arithmetic encoder. Rate-distortion performance results on seismic waveforms are presented for various filter banks and numbers of levels of decomposition.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114896874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An investigation of effective compression ratios for the proposed synchronous data compression proto","authors":"R. R. Little","doi":"10.1109/DCC.1995.515597","DOIUrl":"https://doi.org/10.1109/DCC.1995.515597","url":null,"abstract":"The Telecommunications Industry Association (TIA) Technical Committee TR-30 ad hoc Committee on Compression of Synchronous Data for DSUs has submitted three documents to TR30.1 as contributions which specify a standard data compression protocol. The proposed standard uses the Point-to-Point Protocol developed by the Internet Engineering Task Force (IETF) with certain extensions. Following a comment period, the ad hoc committee planned to submit the draft standard document to TR30.1 for ballot at the January 30, 1995, meeting, with balloting expected to be completed in May.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123894296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparison of the Z, E_8 and Leech lattices for image subband quantization","authors":"Zheng Gao, Feng Chen, B. Belzer, J. Villasenor","doi":"10.1109/DCC.1995.515521","DOIUrl":"https://doi.org/10.1109/DCC.1995.515521","url":null,"abstract":"Lattice vector quantization schemes offer high coding efficiency without the burden associated with generating and searching a codebook. The distortion associated with a given lattice is often expressed in terms of the G number, which is a measure of the mean square error per dimension generated by quantization of a uniform source. Subband image coefficients, however, are best modeled by a generalized Gaussian, leading to distortion characteristics that are quite different from those encountered for uniform, Laplacian, or Gaussian sources. We present here the distortion associated with Z, E_8, and Leech lattice quantization for coding of generalized Gaussian sources, and show that for low bit rates the Z lattice offers both the best performance and the lowest implementational complexity.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127463324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vector quantization and clustering: a pyramid approach","authors":"D. Tamir, Chi-Yeon Park, Wook-Sung Yoo","doi":"10.1109/DCC.1995.515592","DOIUrl":"https://doi.org/10.1109/DCC.1995.515592","url":null,"abstract":"A multi-resolution K-means clustering method is presented. Starting with a low-resolution sample of the input data, the K-means algorithm is applied to a sequence of samples of the given data at monotonically increasing resolutions. The cluster centers obtained at a low-resolution stage are used as the initial cluster centers for the next, higher-resolution stage. The idea behind this method is that a good estimate of the initial locations of the cluster centers can be obtained through K-means clustering of a sample of the input data. K-means clustering of the entire data set, with the initial cluster centers estimated by clustering a sample of the input data, reduces the convergence time of the algorithm.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127530741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Matching pursuit video coding at very low bit rates","authors":"Ralph Neff, A. Zakhor","doi":"10.1109/DCC.1995.515531","DOIUrl":"https://doi.org/10.1109/DCC.1995.515531","url":null,"abstract":"Matching pursuits refers to a greedy algorithm which matches structures in a signal to a large dictionary of functions. In this paper, we present a matching-pursuit based video coding system which codes motion residual images using a large dictionary of Gabor functions. One feature of our system is that bits are assigned progressively to the highest-energy areas in the motion residual image. The large dictionary size is another advantage, since it allows structures in the motion residual to be represented using few significant coefficients. Experimental results compare the performance of the matching-pursuit system to a hybrid-DCT system at various bit rates between 6 and 128 kbit/s. Additional experiments show how the matching pursuit system performs if the Gabor dictionary is replaced by an 8×8 DCT dictionary.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126964880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time VLSI compression for high-speed wireless local area networks","authors":"Bongjin Jung, W. Burleson","doi":"10.1109/DCC.1995.515541","DOIUrl":"https://doi.org/10.1109/DCC.1995.515541","url":null,"abstract":"Summary form only given, substantially as follows. Presents a new compact, power-efficient, and scalable VLSI array for the first Lempel-Ziv (LZ) algorithm, to be used in high-speed wireless data communication systems. Lossless data compression can be used to inexpensively halve the amount of data to be transmitted, thus improving the effective bandwidth of the communication channel and, in turn, the overall network performance. For wireless networks, the data rate and latency requirements are appropriate for a dedicated VLSI implementation of LZ compression. The nature of wireless networks requires that any additional VLSI hardware also be small, low-power and inexpensive. The architecture uses a novel custom systolic array and a simple dictionary FIFO which is implemented using conventional SRAM. The architecture consists of M simple processing elements, where M is the maximum length of the string to be replaced with a codeword, which for practical LAN applications can range from 16 to 32. The systolic cell has been optimized to remove any superfluous state information or logic, thus making it completely dedicated to the task of LZ compression. A prototype chip has been implemented using 2 μm CMOS technology. Using M=32, and assuming a 2:1 compression ratio, the system can process approximately 90 Mbps with a 100 MHz clock rate.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132117354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lossless region coding schemes","authors":"M. Turner","doi":"10.1109/DCC.1995.515591","DOIUrl":"https://doi.org/10.1109/DCC.1995.515591","url":null,"abstract":"Summary form only given. The technique of describing regions as separate entities within an image has been applied in specific fields of image compression for many years. This study aims to show that the technique, when applied with care, is practical for virtually all image types. Three different schemes for segmenting and coding an image have been considered: array covering; region numbering; and edge following.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132825522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convergence of fractal encoded images","authors":"J. Kominek","doi":"10.1109/DCC.1995.515514","DOIUrl":"https://doi.org/10.1109/DCC.1995.515514","url":null,"abstract":"Fractal image compression, despite its great potential, suffers from some flaws that may prevent its adoption from becoming more widespread. One such problem is the difficulty of guaranteeing convergence, let alone a specific error tolerance. To help surmount this problem, we have introduced the concepts of compound, cycle, and partial contractivity, which are indispensable for understanding the convergence of fractal images. Most importantly, they connect the behavior of individual pixels to the image as a whole, and relate that behavior to the component affine transforms.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130970659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantization distortion in block transform-compressed data","authors":"A. Boden","doi":"10.1109/DCC.1995.515537","DOIUrl":"https://doi.org/10.1109/DCC.1995.515537","url":null,"abstract":"Summary form only given, as follows. The JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. Block transform compression schemes exhibit sharp discontinuities at data block boundaries: this phenomenon is a visible manifestation of the compression quantization distortion. For example, in compression algorithms such as JPEG these blocking effects manifest themselves visually as discontinuities between adjacent 8×8 pixel image blocks. In general the distortion characteristics of block transform-based compression techniques are understandable in terms of the properties of the transform basis functions and the transform coefficient quantization error. In particular, the blocking effects exhibited by JPEG are explained by two simple observations demonstrated in this work: a disproportionate fraction of the total quantization error accumulates on block edge pixels; and the quantization errors among pixels within a compression block are highly correlated, while the quantization errors between pixels in separate blocks are uncorrelated. A generic model of block transform compression quantization noise is introduced, applied to synthesized and real one and two dimensional data using the DCT as the transform basis, and results of the model are shown to predict distortion patterns observed in data compressed with JPEG.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"21 1-3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123873608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}