Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225): Latest Articles

Compression of sparse matrices by arithmetic coding
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672126
T. Bell, B. McKenzie
The compression of matrices where the majority of the entries are a fixed constant (most typically zero), usually referred to as sparse matrices, has received much attention. We evaluate the performance of existing methods, and consider how arithmetic coding can be applied to the problem to achieve better compression. The result is a method that gives better compression than existing methods, and still allows constant-time access to individual elements if required. Although for concreteness we express our method in terms of two-dimensional matrices where the majority of the values are zero, it is equally applicable to matrices of any number of dimensions and where the fixed known constant is any value. We assume that the number of dimensions and their ranges are known, but do not assume that any information is available externally regarding the number of non-zero entries.
Citations: 6
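The abstract does not spell out the coding model, so the sketch below only illustrates the general idea under stated assumptions: arithmetic-coding the zero/non-zero significance map of a sparse matrix with a simple adaptive binary model (a Krichevsky-Trofimov estimator), reporting the ideal code length rather than running a full coder. The function name and the toy matrix are hypothetical.

```python
# A minimal sketch (not the paper's exact model): estimate the cost of
# arithmetic-coding the zero/non-zero "significance map" of a sparse matrix
# with an adaptive binary model (Krichevsky-Trofimov estimator).
import math

def significance_map_cost(matrix, constant=0):
    """Ideal arithmetic-code length, in bits, for the map of entries != constant."""
    zeros = ones = 0          # adaptive counts; KT estimator adds 1/2 to each
    bits = 0.0
    for row in matrix:
        for value in row:
            bit = 1 if value != constant else 0
            p_one = (ones + 0.5) / (zeros + ones + 1.0)
            p = p_one if bit else (1.0 - p_one)
            bits += -math.log2(p)
            if bit:
                ones += 1
            else:
                zeros += 1
    return bits

m = [[0, 0, 3, 0], [0, 0, 0, 0], [7, 0, 0, 0]]
print(round(significance_map_cost(m), 2), "bits for the significance map")
```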
On accelerating fractal compression
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672256
Hsueh-Ting Chu, Chaur-Chin Chen
Summary form only given. Image data compression by fractal techniques has been widely investigated. Although its high compression ratio and resolution-free decoding properties are attractive, the encoding process is computationally demanding if optimal compression is to be achieved. This article proposes a fast fractal-based encoding algorithm (ACC) that uses the intensity changes of neighboring pixels to search for a suboptimal domain block for a given range block. Experimental results show that our algorithm achieves results close to those of the optimal algorithm (OPT) for the 256×256 images Jet, Lenna, Mandrill, and Peppers, at a compression ratio of 16. A comparison of the performance of algorithms OPT and ACC on a Sun Ultra 1 Sparc workstation is given.
Citations: 0
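For context, here is a toy version of the exhaustive range-domain search that the OPT baseline performs and that ACC is designed to prune. The block size, the random test image, and the least-squares fit of the contrast/brightness map are illustrative assumptions, not the paper's implementation.

```python
# A toy sketch of the exhaustive range-domain search ("OPT"-style baseline).
import numpy as np

def best_domain(range_block, image, B=4):
    """Brute force: find the domain block and affine map s*D+o minimizing error."""
    h, w = image.shape
    best = (np.inf, None, 0.0, 0.0)
    r = range_block.astype(float).ravel()
    for y in range(0, h - 2*B + 1, B):
        for x in range(0, w - 2*B + 1, B):
            d = image[y:y+2*B:2, x:x+2*B:2].astype(float).ravel()  # 2x downsample
            # least-squares contrast s and brightness o so that r ~ s*d + o
            A = np.vstack([d, np.ones_like(d)]).T
            (s, o), res, *_ = np.linalg.lstsq(A, r, rcond=None)
            err = float(res[0]) if res.size else float(((A @ [s, o] - r) ** 2).sum())
            if err < best[0]:
                best = (err, (y, x), s, o)
    return best

img = np.random.randint(0, 256, (32, 32))
print(best_domain(img[:4, :4], img)[:2])   # (error, top-left of best domain)
```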
Fast convergence with a greedy tag-phrase dictionary
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672128
T. Smith, Ross Peeters
Lexical categories have been shown to assist in giving good compression results when incorporated into context models. This paper describes a greedy dictionary-based model that maintains a dictionary of tag-phrases, along with separate lexicons for each unique tag. The text is tagged with part-of-speech (POS) labels and then given to the encoder, which uses the tags to construct the phrase dictionary in a manner similar to LZ78. The output is a sequence of arithmetically encoded phrase numbers, coupled with the information needed to match the correct word with each tag in the phrase. Each unique word (defined as each novel word/tag pair) is transmitted once when it is first encountered, then retained in the appropriate dictionary and thereafter arithmetically encoded according to the empirical distribution for that dictionary whenever the word is encountered. We present results from empirical tests showing that this "tag-phrase dictionary" technique achieves compression nearly identical to that obtainable using PPM, an explicit-context model. This goes against the widely held view that greedy dictionary schemes require much larger samples of text before they can compete with statistical context methods. Some interesting theoretical issues pertaining to text compression in general are implied, and these are also discussed.
Citations: 1
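The phrase-dictionary construction is described as LZ78-like over tag sequences; a minimal sketch of such a parse follows. The per-tag word lexicons and the arithmetic coding stage are omitted, and all names are illustrative.

```python
# A minimal LZ78-style parse over POS tags (illustrative; the paper couples
# each tag in a phrase with a per-tag word lexicon, omitted here).
def tag_phrases(tags):
    """Greedily parse a tag sequence into dictionary phrases, LZ78-style."""
    dictionary = {(): 0}              # phrase -> index; empty phrase is index 0
    output = []
    phrase = ()
    for tag in tags:
        if phrase + (tag,) in dictionary:
            phrase += (tag,)          # extend the current match
        else:
            output.append((dictionary[phrase], tag))   # (phrase index, new tag)
            dictionary[phrase + (tag,)] = len(dictionary)
            phrase = ()
    if phrase:
        output.append((dictionary[phrase], None))      # flush the final match
    return output

tags = ["DT", "NN", "VB", "DT", "NN", "VB", "DT", "NN"]
print(tag_phrases(tags))
```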
Complexity of preprocessor in MPM data compression system
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672292
J. Kieffer, E. Yang, T. Park, S. Yakowitz
Summary form only given. The multilevel pattern matching data compression system is one of a class of compression algorithms introduced by Kieffer and Yang (see ERA Amer. Math. Soc., vol. 3, p. 11-16, 1997). The MPM system is currently of interest because of its good redundancy performance in losslessly compressing data strings of arbitrary length over a finite alphabet. An MPM system consists of a preprocessor, encoder/decoder, and a reconstruction engine. The preprocessor detects matching patterns in the input data string (substrings of the data appearing in two or more nonoverlapping positions). The preprocessor operates at several levels sequentially, with the number of levels selected by the user. The matching patterns detected at each level are of a fixed length which decreases by a constant factor from level to level, until this fixed length becomes one at the final level. The preprocessor represents information about matching patterns at each level as a string of tokens which is passed to the encoder of the MPM system. The decoder of the MPM system recovers these token strings, from which the reconstruction engine rebuilds the input data string. The preprocessor is the most complex component of the MPM system. We exhibit an implementation of the preprocessor of linear complexity in terms of execution time and space requirements; the number of levels satisfies O(log₂ log₂ n) for input data strings of length n.
Citations: 0
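As a rough illustration of the level structure only, assuming a token format of our own invention (the real MPM token strings differ): blocks of a fixed length are replaced by new/repeat tokens, and unmatched material is passed to the next level at half the block length until the length reaches one.

```python
# An illustrative sketch of multilevel pattern matching (not the actual MPM
# token format): at each level, blocks of a fixed length are tokenized,
# reusing indices for blocks already seen; new material is passed down to
# the next level at half the block length.
def mpm_tokens(data, top_len=8):
    levels = []
    blocks = [data[i:i+top_len] for i in range(0, len(data), top_len)]
    length = top_len
    while length >= 1:
        seen, tokens, new_material = {}, [], []
        for b in blocks:
            if b in seen:
                tokens.append(("repeat", seen[b]))
            else:
                seen[b] = len(seen)
                tokens.append(("new", seen[b]))
                new_material.append(b)
        levels.append((length, tokens))
        length //= 2
        blocks = [b[i:i+length] for b in new_material
                  for i in range(0, len(b), length)] if length else []
    return levels

for length, toks in mpm_tokens("abababcdabababcd"):
    print(length, toks)
```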
Reversible variable length codes for efficient and robust image and video coding
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672209
Jiangtao Wen, J. Villasenor
The International Telecommunications Union (ITU) has adopted reversible variable length codes (RVLCs) for use in the emerging H.263+ video compression standard. As the name suggests, these codes can be decoded in two directions and can therefore be used by a decoder to enhance robustness in the presence of transmission bit errors. In addition, these RVLCs involve little or no efficiency loss relative to the corresponding non-reversible variable length codes. We present the ideas behind two general classes of RVLCs and discuss the results of applying these codes in the framework of the H.263+ and MPEG-4 video coding standards.
Citations: 108
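A variable-length code is decodable in both directions when no codeword is a prefix or a suffix of another, so reversibility is easy to check mechanically. The sketch below does exactly that on an illustrative symmetric code, not one taken from the paper.

```python
# A small check: a code is reversible (decodable forward and backward) when
# it is both prefix-free and suffix-free. Example codes are illustrative.
def is_reversible(codewords):
    def affix_free(words, flip=False):
        ws = [w[::-1] for w in words] if flip else list(words)
        # a is a suffix of b exactly when reversed(a) is a prefix of reversed(b)
        return not any(a != b and b.startswith(a) for a in ws for b in ws)
    return affix_free(codewords) and affix_free(codewords, flip=True)

symmetric = ["0", "11", "101", "1001"]     # palindromic codewords: trivially suffix-free
print(is_reversible(symmetric))            # True: prefix- and suffix-free
print(is_reversible(["0", "01", "11"]))    # False: "0" is a prefix of "01"
```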
Predictive fractal image coding: hybrid algorithms and compression of residuals
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672277
Thomas Freina, A. Uhl
Summary form only given. The authors introduce hybrid algorithms consisting of a fractal predictor in the spatial domain followed by coding of the residual image (the error image between the fractal prediction and the image to be compressed). The residual is coded either with a wavelet coder (based on the SPIHT coder) or with DCT-based coding (as used in interframe compression, e.g. for B or P frames in H.261 and MPEG-1/2). Additionally, they contribute to the discussion about the performance of wavelet- and DCT-based algorithms for compressing motion-compensated error frames in interframe video coding, since the residual images considered in the proposed hybrid algorithms exhibit similar (or even identical) statistical properties to motion-compensated error frames.
Citations: 3
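A bare-bones sketch of the hybrid structure, under stated assumptions: a (here, faked) fractal prediction is subtracted from the image, and the residual is coded with a quantized 8x8 block DCT, as in interframe coders. It uses scipy's dctn/idctn; the uniform quantizer and the stand-in predictor are illustrative, not the paper's coders.

```python
# Sketch of predict-then-code-the-residual: quantized 8x8 block DCT of the
# residual between an image and a stand-in prediction.
import numpy as np
from scipy.fft import dctn, idctn

def code_residual(image, prediction, q=16, B=8):
    """Quantize 8x8 DCT blocks of the residual; return the reconstruction."""
    residual = image.astype(float) - prediction.astype(float)
    recon = np.empty_like(residual)
    for y in range(0, residual.shape[0], B):
        for x in range(0, residual.shape[1], B):
            coeffs = dctn(residual[y:y+B, x:x+B], norm="ortho")
            coeffs = np.round(coeffs / q) * q            # uniform quantization
            recon[y:y+B, x:x+B] = idctn(coeffs, norm="ortho")
    return prediction + recon

img = np.random.randint(0, 256, (16, 16))
pred = np.clip(img + np.random.randint(-8, 9, img.shape), 0, 255)  # fake predictor
out = code_residual(img, pred)
print(float(np.abs(out - img).mean()))     # mean reconstruction error
```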
Reversing the error-correction scheme for a fault-tolerant indexing
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672237
S. Berkovich, E. El-Qawasmeh
Summary form only given. The article presents an innovative approach to approximate matching of multi-attribute objects based on reversing the conventional scheme of error-correction coding. The approximate matching problem primarily arises in information retrieval systems, which can store fuzzily described items and operate with nebulous searching criteria. To establish an approximate equivalence relation on a set of multi-attribute objects it has been suggested to apply a decoding procedure to binary vectors corresponding to these objects and to use the obtained message words as hash codes. With this hashing technique it is possible to construct "fault-tolerant" indices allowing certain mismatches of binary vectors in terms of Hamming metrics. The simplest practical realization of this technique is based on the so-called perfect Golay code, which maps 23-bit vectors into 12-bit message words. In this case, two different 23-bit vectors at a Hamming distance of 2 would have some common 12-bit indices. This provides an organization for direct retrieval of 23-bit vectors with up to two mismatches from a given key. The proposed technique employs a reasonable redundancy and can trade utilization of extra memory for the speed and range of searching. Besides a direct application to information retrieval, the developed technique is also beneficial for complex computational procedures incorporating near-matching operations. A typical procedure of this kind is recovering close matches from vector-quantization tables.
Citations: 17
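To make the "reversed ECC" idea concrete on a small scale, the sketch below substitutes the Hamming(7,4) code for the perfect Golay (23,12) code used in the paper: decoding maps any 7-bit vector to a 4-bit message, so vectors within Hamming distance 1 of the same codeword land in the same hash bucket.

```python
# Toy "reversed ECC" hashing with Hamming(7,4) in place of Golay (23,12):
# the decoded 4-bit message serves as a fault-tolerant hash code.
def hamming74_decode(v):
    """v: list of 7 bits at positions 1..7; returns the 4 data bits."""
    b = {i + 1: v[i] for i in range(7)}
    s1 = b[1] ^ b[3] ^ b[5] ^ b[7]
    s2 = b[2] ^ b[3] ^ b[6] ^ b[7]
    s4 = b[4] ^ b[5] ^ b[6] ^ b[7]
    pos = s1 + 2 * s2 + 4 * s4        # syndrome = position of the flipped bit
    if pos:
        b[pos] ^= 1                   # correct the single-bit error
    return (b[3], b[5], b[6], b[7])   # data bits sit at positions 3, 5, 6, 7

x = [0, 1, 1, 0, 0, 1, 1]             # a valid codeword
y = list(x); y[4] ^= 1                # a 1-bit mismatch of x
print(hamming74_decode(x) == hamming74_decode(y))   # True: same hash bucket
```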
Hybrid image compression scheme based on wavelet transform and adaptive context modeling
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672235
P. Bao, Xiaolin Wu
Summary form only given. We propose a hybrid image compression scheme based on wavelet transform, HVS thresholding and L∞-constrained adaptive context modelling. This hybrid system combines the strengths of the wavelet transform, HVS thresholding and adaptive context modelling to yield a near-optimal compression scheme. The wavelet transform is very powerful in localizing the global spatial and frequency correlation. The HVS model-based thresholding is designed to identify and eliminate the wavelet coefficients to which the human visual system is insensitive. The context-based modelling is superior in decorrelating the local redundancy. In the scheme, the image is first decomposed into multiresolution subimages using the orthogonal wavelet transform; each subimage corresponds to an octave band in the wavelet decomposition. The coefficients in the high-pass octave bands are then quantized, through HVS frequency- and spatial-model-based thresholding and vector quantization, into a wavelet decomposition in which only coefficients significant to the HVS are retained. In this HVS-quantized wavelet decomposition, the coefficients insignificant to the human visual system are normalized to zero, and the global spatial and frequency correlations are exploited and removed. Then the quantized subimages in the low-pass band and the remaining high-pass octave bands of each octave level are processed using the L∞-constrained CALIC to decorrelate the local redundancy. It is demonstrated that the hybrid scheme is among the best compression schemes, achieving excellent compression rates and competitive PSNR while maintaining small visual distortion. Compared with the original CALIC, we were able to increase PSNR by 0.65 dB or more and obtain bit rates 15 percent lower. We were also able to obtain competitive PSNR results against the best wavelet coders while maintaining smaller visual distortion. In particular, the wavelet CALIC obtained 1.34 to 7.84 dB higher PSNR on the standard ISO test benchmarks than SPIHT, one of the best wavelet coders.
Citations: 0
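A sketch of the thresholding stage only, assuming the third-party pywt package: high-pass wavelet coefficients below a per-level threshold are zeroed before further coding. The threshold schedule here is an arbitrary stand-in, not an HVS sensitivity model, and the wavelet choice is illustrative.

```python
# Zero small high-pass wavelet coefficients with a per-level threshold
# (a stand-in schedule, not an HVS model), then reconstruct.
import numpy as np
import pywt

def threshold_wavelet(image, levels=3, base_t=10.0):
    coeffs = pywt.wavedec2(image.astype(float), "db2", level=levels)
    out = [coeffs[0]]                          # keep the low-pass band intact
    for lev, bands in enumerate(coeffs[1:]):   # coeffs[1] is the coarsest level
        t = base_t * (lev + 1)                 # finer bands get larger thresholds
        out.append(tuple(np.where(np.abs(b) < t, 0.0, b) for b in bands))
    return pywt.waverec2(out, "db2")

img = np.random.randint(0, 256, (64, 64))
rec = threshold_wavelet(img)
print(round(float(np.abs(rec[:64, :64] - img).mean()), 2))  # mean distortion
```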
AudioPaK-an integer arithmetic lossless audio codec
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672286
M. Hans, R. Schafer
We designed a simple, lossless audio codec, called AudioPaK, which uses only a small number of integer arithmetic operations on both the coder and the decoder side. The main operations of this codec, polynomial prediction and Golomb-Rice coding, are carried out on a frame basis. Our coder performs as well as, or even better than, most lossless audio codecs.
Citations: 9
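The two stages named in the abstract, polynomial prediction and Golomb-Rice coding, can be sketched compactly. The order-0 to order-3 fixed predictors below are the standard ones used by codecs of this family; the frame, the parameter-selection rule, and all names are assumptions, not necessarily AudioPaK's exact choices.

```python
# Sketch of per-frame polynomial prediction followed by Golomb-Rice coding.
import math

PREDICTORS = [  # order-0..3 fixed polynomial predictors (integer arithmetic)
    lambda x, n: 0,
    lambda x, n: x[n-1],
    lambda x, n: 2*x[n-1] - x[n-2],
    lambda x, n: 3*x[n-1] - 3*x[n-2] + x[n-3],
]

def frame_residuals(frame):
    """Pick the predictor with the smallest total |residual| for this frame."""
    best = None
    for p in PREDICTORS:
        res = [frame[n] - p(frame, n) for n in range(3, len(frame))]
        if best is None or sum(map(abs, res)) < sum(map(abs, best)):
            best = res
    return best

def rice_encode(residuals):
    """Golomb-Rice code: zigzag-map to non-negative, unary quotient + k low bits."""
    mean = max(1, sum(abs(r) for r in residuals) // len(residuals))
    k = max(0, math.ceil(math.log2(mean)))   # simple parameter choice from the mean
    bits = []
    for r in residuals:
        u = 2*r if r >= 0 else -2*r - 1      # zigzag map to non-negative
        low = format(u & ((1 << k) - 1), f"0{k}b") if k else ""
        bits.append("1" * (u >> k) + "0" + low)
    return k, "".join(bits)

frame = [10, 12, 15, 17, 18, 18, 17, 15, 12, 10]
k, code = rice_encode(frame_residuals(frame))
print(k, code)
```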
Weighting of double exponential distributed data in lossless image compression
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225). Pub Date: 1998-03-30. DOI: 10.1109/DCC.1998.672268
N. Ekstrand, B. Smeets
Summary form only given. State-of-the-art lossless image compression schemes use a prediction scheme, a context model and an arithmetic encoder. The discrepancy between the predicted value and the actual value is regarded as double exponentially distributed. The BT/CARP scheme was considered in Weinberger et al. (1996) as a means of finding limits in lossless image compression. The scheme uses the context algorithm (Rissanen 1983), which is, in terms of redundancy, an asymptotically optimal tree algorithm. Further, BT/CARP uses extended tree nodes which contain a linear prediction scheme and a model for the double exponentially distributed data (DE data). The model parameters are estimated, and from the corresponding distribution the symbol probability distribution can be calculated. The drawback of the parameter-estimation technique is its poor performance on short sequences. In order to improve the BT/CARP scheme we have exchanged the estimation techniques for probability-assignment techniques: the CTW algorithm (Willems et al. 1995) and our weighting method for DE data. We conclude that the suggested probability-assignment technique has a favorable effect on compression performance when compared with the traditional estimation techniques. The assumed improvement was verified on a test-image set.
Citations: 1
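To illustrate weighting versus point estimation for DE data: a Bayesian mixture over a few candidate scale parameters of a two-sided geometric (discrete Laplacian) model assigns each symbol a probability without ever committing to one estimated parameter. The parameter grid and discretization below are assumptions for illustration, not the paper's weighting method.

```python
# Weighting over candidate double-exponential models: a uniform-prior Bayesian
# mixture over a small grid of decay parameters, updated per symbol.
import math

def discrete_laplace_p(e, theta):
    """P(e) for a two-sided geometric distribution with decay theta in (0,1)."""
    return (1 - theta) / (1 + theta) * theta ** abs(e)

def weighted_code_length(errors, thetas=(0.5, 0.7, 0.9, 0.97)):
    """Bits for the sequence under a uniform mixture of the candidate models."""
    weights = [1.0 / len(thetas)] * len(thetas)
    bits = 0.0
    for e in errors:
        ps = [discrete_laplace_p(e, t) for t in thetas]
        p_mix = sum(w * p for w, p in zip(weights, ps))
        bits += -math.log2(p_mix)
        weights = [w * p / p_mix for w, p in zip(weights, ps)]  # posterior update
    return bits

errs = [0, 1, -1, 0, 2, 0, -1, 0, 0, 3]   # toy prediction residuals
print(round(weighted_code_length(errs), 2), "bits")
```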