Proceedings DCC '97. Data Compression Conference - Latest Publications

An iterative technique for universal lossy compression of individual sequences
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.581995
Daniel Manor, M. Feder
{"title":"An iterative technique for universal lossy compression of individual sequences","authors":"Daniel Manor, M. Feder","doi":"10.1109/DCC.1997.581995","DOIUrl":"https://doi.org/10.1109/DCC.1997.581995","url":null,"abstract":"Universal lossy compression of a data sequence can be obtained by fitting to the source sequence a \"simple\" reconstruction sequence that can be encoded efficiently and yet be within a tolerable distortion from the given source sequence. We develop iterative algorithms to find such a reconstruction sequence, for a given source sequence, using different criteria of simplicity for the reconstruction sequence. As a result we obtain a practical universal lossy compression method. The proposed method can be applied to source sequences defined over finite or continuous alphabets. We discuss the relation between our method and quantization techniques like entropy coded vector quantization (ECVQ) and trellis coded quantization (TCQ).","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131174903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
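A minimal sketch of the fitting idea in the abstract above, for the finite-alphabet case and under assumptions of my own (coordinate descent on a Lagrangian cost: zeroth-order empirical code length plus lambda times squared-error distortion); this is one simple instance of the approach, not the authors' algorithm:

    import math
    from collections import Counter

    def empirical_code_length(seq):
        """Approximate bits to entropy-code seq under a zeroth-order model."""
        n = len(seq)
        return -sum(c * math.log2(c / n) for c in Counter(seq).values())

    def iterative_fit(source, alphabet, lam=1.0, sweeps=10):
        """Update one reconstruction symbol at a time to lower rate + lam*distortion."""
        recon = list(source)                       # start at zero distortion
        for _ in range(sweeps):
            changed = False
            for i in range(len(recon)):
                old, best, best_cost = recon[i], recon[i], None
                for a in alphabet:
                    recon[i] = a
                    cost = (empirical_code_length(recon)
                            + lam * sum((s - r) ** 2 for s, r in zip(source, recon)))
                    if best_cost is None or cost < best_cost:
                        best, best_cost = a, cost
                recon[i] = best
                changed = changed or best != old
            if not changed:                        # local optimum of the Lagrangian
                break
        return recon

Sweeping lam trades rate against distortion; larger lam keeps the reconstruction closer to the source at higher coding cost.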
L∞-constrained high-fidelity image compression via adaptive context modeling
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.581978
Xiaolin Wu, W. K. Choi, P. Bao
{"title":"L/sub /spl infin//-constrained high-fidelity image compression via adaptive context modeling","authors":"Xiaolin Wu, W. K. Choi, P. Bao","doi":"10.1109/DCC.1997.581978","DOIUrl":"https://doi.org/10.1109/DCC.1997.581978","url":null,"abstract":"We study high-fidelity image compression with a given tight bound on the maximum error magnitude. We propose some practical adaptive context modeling techniques to correct prediction biases caused by quantizing prediction residues, a problem common to the current DPCM like predictive nearly-lossless image coders. By incorporating the proposed techniques into the nearly-lossless version of CALIC, we were able to increase its PSNR by 1 dB or more and/or reduce its bit rate by ten per cent or more. More encouragingly, at bit rates around 1.25 bpp our method obtained competitive PSNR results against the best wavelet coders, while obtaining much smaller maximum error magnitude.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131131968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
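An illustrative sketch of the two ingredients named in the abstract, with a toy predictor and context of my own rather than CALIC's: a uniform residue quantizer of step 2d+1 that guarantees |x − x̂| ≤ d, and a per-context running bias correction added to the prediction:

    def near_lossless_encode(pixels, d):
        """DPCM where |x - xhat| <= d is guaranteed, plus per-context bias correction."""
        step = 2 * d + 1
        bias = {}                                  # context -> (error sum, count)
        recon, symbols = [], []
        for i, x in enumerate(pixels):
            pred = recon[i - 1] if i else 128      # stand-in previous-sample predictor
            ctx = 0 if i < 2 else min(abs(recon[i - 1] - recon[i - 2]) // 8, 3)
            s, c = bias.get(ctx, (0, 0))
            if c:
                pred += round(s / c)               # correct the learned prediction bias
            q = (x - pred + d) // step             # uniform quantizer; floor division
            xhat = pred + q * step                 # reconstruction, |x - xhat| <= d
            bias[ctx] = (s + (x - xhat), c + 1)    # update running bias statistics
            recon.append(xhat)
            symbols.append(q)                      # the q's go to the entropy coder
        return symbols, recon

Since (x - pred + d) mod step lies in [0, step-1], the reconstruction error x - xhat always lies in [-d, d], which is the L∞ constraint.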
Adaptive vector quantization-Part I: a unifying structure
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582094
J. Fowler
{"title":"Adaptive vector quantization-Part I: a unifying structure","authors":"J. Fowler","doi":"10.1109/DCC.1997.582094","DOIUrl":"https://doi.org/10.1109/DCC.1997.582094","url":null,"abstract":"Summary form only given. Although rate-distortion theory establishes optimal coding properties for vector quantization (VQ) of stationary sources, the fact that real sources are, in actuality, nonstationary has led to the proposal of adaptive-VQ (AVQ) algorithms that compensate for changing source statistics. Because of the scarcity of rate-distortion results for nonstationary sources, proposed AVQ algorithms have been mostly heuristically, rather than analytically, motivated. As a result, there has been, to date, little attempt to develop a general model of AVQ or to compare the performance associated with existing AVQ algorithms. We summarize observations resulting from detailed studies of a number of previously published AVQ algorithms. To our knowledge, the observations represent the first attempt to define and describe AVQ in a general framework. We begin by proposing a mathematical definition of AVQ. Because of the large variety of algorithms that have purported to be AVQ, it is unclear from prior literature precisely what is meant by this term. Any resulting confusion is likely due to a certain imprecise, and sometimes ambiguous, use of the word \"adaptive\" in VQ literature. However, common to a large part of this literature is the notion that AVQ properly refers to techniques that dynamically vary the contents of a VQ codebook as coding progresses. Our definition of AVQ captures this idea of progressive codebook updating in a general mathematical framework.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132748021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
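To make "progressive codebook updating" concrete, here is a generic skeleton of my own (the paper's definition is mathematical; the match threshold and replacement rule here are placeholders): each vector is coded against the current codebook, and when the match is poor a new codeword is transmitted and inserted, so that encoder and decoder evolve the same codebook deterministically:

    import numpy as np

    def avq_encode(vectors, codebook, threshold):
        """Nearest-codeword coding; poorly matched vectors replace a codeword."""
        codebook = [np.asarray(c, dtype=float) for c in codebook]
        stream = []
        for v in vectors:
            v = np.asarray(v, dtype=float)
            dists = [float(np.sum((v - c) ** 2)) for c in codebook]
            j = int(np.argmin(dists))
            if dists[j] > threshold:
                stream.append(("update", v.tolist()))   # send the new codeword itself
                codebook[j] = v                         # decoder applies the same rule
            else:
                stream.append(("index", j))             # ordinary VQ index
        return stream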
Fast residue coding for lossless textual image compression
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582065
C. Constantinescu, R. Arps
{"title":"Fast residue coding for lossless textual image compression","authors":"C. Constantinescu, R. Arps","doi":"10.1109/DCC.1997.582065","DOIUrl":"https://doi.org/10.1109/DCC.1997.582065","url":null,"abstract":"Lossless textual image compression based on pattern matching classically includes a \"residue\" coding step that refines an initially lossy reconstructed image to its lossless original form. This step is typically accomplished by arithmetically coding the predicted value for each lossless image pixel, based on the values of previously reconstructed nearby pixels in both the lossless image and its precursor lossy image. Our contribution describes background typical prediction (TPR-B), a fast method for residue coding based on \"typical prediction\" which permits the skipping of pixels to be arithmetically encoded; and non-symbol typical prediction (TPR-NS), an improved compression method for residue coding also based on \"typical prediction\". Experimental results are reported based on the residue coding method proposed in Howard's (see Proc. of '96 Data Compression Conf., Snowbird, Utah, p.210-19, 1996) SPM algorithm and the lossy images it generates when applied to eight CCITT bi-level test images. These results demonstrate that after lossy image coding, 88% of the lossless image pixels in the test set can be predicted using TPR-B and need not be residue coded at all. In terms of saved SPM arithmetic coding operations while residue coding, TPR-B achieves an average coding speed increase of 8 times. Using TPR-NS together with TPR-B increases the SPM residue coding compression ratios by an average of 11%.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124385847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
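A toy rendering of the typicality idea (the actual TPR-B test is the paper's, not this one-neighbor rule): in contexts where the causal lossless and lossy pixels agree, a single highly skewed flag says whether the current pixel simply equals its lossy counterpart; only flagged exceptions, plus pixels in atypical contexts, reach the arithmetic coder. The decoder can reproduce the same context test from its already-decoded pixels, so skipped pixels cost almost nothing:

    def residue_encode(lossless, lossy):
        """Split residue coding into cheap flags and the few pixels needing full coding."""
        flags, coded = [], []
        for i, (x, y) in enumerate(zip(lossless, lossy)):
            context_typical = i == 0 or lossless[i - 1] == lossy[i - 1]
            if context_typical:
                flags.append(int(x != y))   # skewed binary event, nearly free to code
                if x == y:
                    continue                # decoder just copies the lossy pixel
            coded.append(x)                 # context-modeled, arithmetic coded in SPM
        return flags, coded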
Low bit rate color image coding with adaptive encoding of wavelet coefficients
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582118
S. Meadows, S. Mitra
{"title":"Low bit rate color image coding with adaptive encoding of wavelet coefficients","authors":"S. Meadows, S. Mitra","doi":"10.1109/DCC.1997.582118","DOIUrl":"https://doi.org/10.1109/DCC.1997.582118","url":null,"abstract":"We report the performance of the embedded zerotree wavelet (EZW) using successive-approximation quantization and an adaptive arithmetic coding for effective reduction in bit rates while maintaining high visual quality of reconstructed color images. For 24 bit color images, excellent visual quality is maintained upto a bit rate reduction to approximately 0.48 bpp by EZW yielding a compression ratio (CR) of 50:1. Further bit rate reduction to 0.375 bpp results in a visible degradation by EZW, as is the case when using the adaptive vector quantizer AFLC-VQ. However, the bit rate reduction by AFLC-VQ was computed from the quantizer output and did not include any subsequent entropy coding. Therefore entropy coding of the multi-resolution codebooks generated by adaptive vector quantization of the wavelet coefficients in the AFLC-VQ scheme should reduce the bit rate to at least 0.36 bpp (CR 67:1) at the desired quality currently obtainable at 0.48 bpp by EZW.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123506803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
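Successive-approximation quantization is what makes the EZW bitstream embedded: coefficients are refined one halved threshold at a time, so the stream can be cut at any point to meet a target rate. A stripped-down sketch of the refinement alone, omitting EZW's zerotree significance coding:

    def successive_approx(coeffs, passes=6):
        """Refine coefficients one halved threshold at a time (embedded bitstream)."""
        T = max(abs(c) for c in coeffs) / 2.0       # initial threshold
        recon = [0.0] * len(coeffs)
        for _ in range(passes):
            for i, c in enumerate(coeffs):
                if abs(c - recon[i]) >= T:          # each decision is one coded bit
                    recon[i] += T if c > recon[i] else -T
            T /= 2.0                                # move to the next, finer bit-plane
        return recon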
An overhead reduction technique for mega-state compression schemes
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582061
A. Bookstein, S. T. Klein, T. Raita
{"title":"An overhead reduction technique for mega-state compression schemes","authors":"A. Bookstein, S. T. Klein, T. Raita","doi":"10.1109/DCC.1997.582061","DOIUrl":"https://doi.org/10.1109/DCC.1997.582061","url":null,"abstract":"Many of the most effective compression methods involve complicated models. Unfortunately, as model complexity increases, so does the cost of storing the model itself. This paper examines a method to reduce the amount of storage needed to represent a Markov model with an extended alphabet, by applying a clustering scheme that brings together similar states. Experiments run on a variety of large natural language texts show that much of the overhead of storing the model can be saved at the cost of a very small loss of compression efficiency.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129909918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
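One plausible reading of the clustering step, with an illustrative L1 distance and greedy threshold of my choosing (the abstract does not spell out the paper's scheme): states whose next-symbol distributions are close are merged and their statistics pooled, so only one distribution per cluster needs to be stored:

    from collections import Counter

    def cluster_states(counts, eps=0.1):
        """counts: state -> Counter of next-symbol frequencies. Returns state -> cluster id."""
        def l1(p, q):
            sp, sq = sum(p.values()), sum(q.values())
            return sum(abs(p[k] / sp - q[k] / sq) for k in set(p) | set(q))
        clusters, assign = [], {}        # clusters: (pooled Counter, member states)
        for state, dist in counts.items():
            for cid, (rep, members) in enumerate(clusters):
                if l1(dist, rep) < eps:  # close enough: share one stored distribution
                    members.append(state)
                    rep.update(dist)     # pool statistics of the merged states
                    assign[state] = cid
                    break
            else:
                clusters.append((Counter(dist), [state]))
                assign[state] = len(clusters) - 1
        return assign

Storage drops from one distribution per state to one per cluster, at the cost of coding each state with its cluster's pooled statistics.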
Compression of generalised Gaussian sources
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582131
A. Puga, A. P. Alves
{"title":"Compression of generalised Gaussian sources","authors":"A. Puga, A. P. Alves","doi":"10.1109/DCC.1997.582131","DOIUrl":"https://doi.org/10.1109/DCC.1997.582131","url":null,"abstract":"Summary form only given. This article introduces a non-linear statistical approach to the interframe video coding assuming a priori that the source is non-Gaussian. To this end generalised Gaussian (GG) modelling and high-order statistics are used and a new optimal coding problem is identified as a simultaneous diagonalisation of 2nd and 4th order cumulant tensors. This problem, named the high-order Karhunen-Loeve transform (HOKLT), is an independent component analysis (ICA) method. Using the available linear techniques for cumulant tensor diagonalisation the HOKLT problem cannot be, in general, solved exactly. Considering the impossibility of solving HOKLT problem within the linear group, a non-linear methodology named non-linear independent components analysis (NLICA) that solves the HOKLT problem was introduced. The structure of the analysis operator produced by NLICA is a linear-nonlinear-linear transformation where the first linear stage is an isoentropic ICA operator and the last linear stage is a principal components analysis (PCA) operator. The non-linear stage is diagonal and it converts marginal densities to Gaussianity conserving marginal entropies. Considering the three basic coding modes within DPCM video coders and the three colour components there are nine different sources. Fitting this sources to GG family, done in this work, has shown how far from Gaussianity these sources are and supports the GG modelling effectiveness.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"1994 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125550350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
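For reference, a standard parameterization of the generalised Gaussian family (this particular form is a common convention, not quoted from the paper), with location μ, scale α, and shape β:

    p(x) = \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)}
           \exp\!\left( -\left( \frac{|x - \mu|}{\alpha} \right)^{\beta} \right)

Here β = 2 recovers the Gaussian and β = 1 the Laplacian; small β gives the sharply peaked, heavy-tailed densities typical of DPCM prediction residues, so the fitted shape parameters quantify how far the nine sources are from Gaussianity.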
Perceptually lossless image compression
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582100
Peter J. Hahn, V. John Mathews
{"title":"Perceptually lossless image compression","authors":"Peter J. Hahn, V., John Mathews","doi":"10.1109/DCC.1997.582100","DOIUrl":"https://doi.org/10.1109/DCC.1997.582100","url":null,"abstract":"Summary form only given. This paper presents an algorithm for perceptually lossless image compression. The approach utilizes properties of the human visual system in the form of a perceptual threshold function (PTF) model. The PTF model determines the amount of distortion that can be introduced at each location of the image. Thus, constraining all quantization errors to levels below the PTF results in perceptually lossless image compression. The system employs a modified form of the embedded zerotree wavelet (EZW) coding algorithm that limits the quantization errors of the wavelet transform coefficients to levels below those specified by the model of the perceptual threshold function. Experimental results demonstrate perceptually lossless compression of monochrome images at bit rates ranging from 0.4 to 1.2 bits per pixel at a viewing distance of six times the image height and at bit rates from 0.2 to 0.5 bits per pixel at a viewing distance of twelve times the image height.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126371180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
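The constraint the abstract describes, every quantization error below a location-dependent perceptual threshold, can be met by the simple device sketched below (an illustration of the constraint itself, not the paper's modified EZW): give each coefficient a uniform quantizer of step 2t, so its maximum error is exactly t:

    def ptf_quantize(coeffs, thresholds):
        """Uniform quantizer per coefficient; max error stays below its threshold t."""
        symbols, recon = [], []
        for c, t in zip(coeffs, thresholds):    # thresholds assumed positive (from PTF)
            step = 2.0 * t                      # uniform-quantizer max error = step/2
            q = round(c / step)
            symbols.append(q)                   # entropy coded in a real system
            recon.append(q * step)              # |c - q*step| <= t by construction
        return symbols, recon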
Effective management of compressed data with packed file systems
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582127
Y. Okada, M. Tokuyo, S. Yoshida, N. Okayasu, H. Shimoi
{"title":"Effective management of compressed data with packed file systems","authors":"Y. Okada, M. Tokuyo, S. Yoshida, N. Okayasu, H. Shimoi","doi":"10.1109/DCC.1997.582127","DOIUrl":"https://doi.org/10.1109/DCC.1997.582127","url":null,"abstract":"Summary form only given. Lossless data compression is commonly used on personal computers to increase their storage capacity. For example, we can get twice the normal capacity by using lossless data compression algorithms. However, it is necessary to locate compressed data of variable sizes in a fixed-size block with as little fragmentation as possible. This can be accomplished by compressed data management (CDM). The amount of storage capacity provided by data compression depends on the ability of CDM. If CDM does not eliminate fragmentation sufficiently, one cannot attain the storage capacity corresponding to the compression ratio. We present an efficient CDM using a new packed file system (PFS). We confirmed that the PFS achieves and maintains 95% of high space efficiency by using only 1/1000 of the table size needed for the entire storage capacity without employing garbage collection.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116219927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
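To illustrate the fragmentation problem a packed file system must solve (the PFS's actual table layout and policies go beyond what the abstract states, so this is a generic first-fit toy assuming each compressed record fits within one block):

    BLOCK = 4096

    def pack(records):
        """records: list of (record id, compressed size). Returns placements and efficiency."""
        blocks = []                            # each block: [free bytes, [(rid, size)]]
        placement = {}
        for rid, size in records:
            for bi, blk in enumerate(blocks):
                if blk[0] >= size:             # first block with enough free space
                    blk[0] -= size
                    blk[1].append((rid, size))
                    placement[rid] = bi
                    break
            else:                              # no block fits: open a new one
                blocks.append([BLOCK - size, [(rid, size)]])
                placement[rid] = len(blocks) - 1
        used = sum(BLOCK - b[0] for b in blocks)
        efficiency = used / (len(blocks) * BLOCK)   # space efficiency, as in the abstract
        return placement, efficiency

The "space efficiency" figure here is the fraction of allocated block space actually occupied by compressed data; the PFS's contribution is keeping that near 95% with a tiny mapping table and no garbage collection.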
An embedded wavelet video coder using three-dimensional set partitioning in hierarchical trees (SPIHT)
Proceedings DCC '97. Data Compression Conference Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582048
Beong-Jo Kim, W. Pearlman
{"title":"An embedded wavelet video coder using three-dimensional set partitioning in hierarchical trees (SPIHT)","authors":"Beong-Jo Kim, W. Pearlman","doi":"10.1109/DCC.1997.582048","DOIUrl":"https://doi.org/10.1109/DCC.1997.582048","url":null,"abstract":"The SPIHT (set partitioning in hierarchical trees) algorithm by Said and Pearlman (see IEEE Trans. on Circuits and Systems for Video Technology, no.6, p.243-250, 1996) is known to have produced some of the best results in still image coding. It is a fully embedded wavelet coding algorithm with precise rate control and low complexity. We present an application of the SPIHT algorithm to video sequences, using three-dimensional (3D) wavelet decompositions and 3D spatio-temporal dependence trees. A full 3D-SPIHT encoder/decoder is implemented in software and is compared against MPEG-2 in parallel simulations. Although there is no motion estimation or compensation in the 3D SPIHT, it performs measurably and visually better than MPEG-2, which employs complicated motion estimation and compensation.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127311303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 361
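The core structural change from 2D SPIHT is the spatio-temporal orientation tree. A rough sketch of the 2x2x2 parent-child relation and the descendant-set significance test at one bit-plane (SPIHT's root-band special cases and its list management are omitted, and the self-child at the origin is filtered out for simplicity):

    def children(t, y, x, shape):
        """2x2x2 offspring of node (t, y, x) in a dyadic 3D decomposition."""
        T, Y, X = shape
        kids = [(2 * t + dt, 2 * y + dy, 2 * x + dx)
                for dt in (0, 1) for dy in (0, 1) for dx in (0, 1)]
        return [k for k in kids
                if k != (t, y, x) and k[0] < T and k[1] < Y and k[2] < X]

    def set_significant(coeffs, shape, node, threshold):
        """True if any coefficient in the descendant set reaches the threshold."""
        stack = list(children(*node, shape))
        while stack:
            t, y, x = stack.pop()
            if abs(coeffs[t][y][x]) >= threshold:
                return True
            stack.extend(children(t, y, x, shape))
        return False

When a descendant set is insignificant, 3D-SPIHT codes the entire spatio-temporal zerotree with a single bit, which is how it exploits temporal redundancy without any motion estimation.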