Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096): Latest Articles

Real-time VBR rate control of MPEG video based upon lexicographic bit allocation
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-10-01 | DOI: 10.1109/DCC.1999.755687
Dzung T. Hoang
Abstract: The MPEG-2 video standard describes a bitstream syntax and a decoder model but leaves many details of the encoding process unspecified, such as encoder bit rate control. The standard defines a hypothetical decoder model, called the video buffering verifier, that can operate in either constant-bit-rate or variable-bit-rate mode. We present a low-complexity algorithm for variable-bit-rate control suitable for low-delay, real-time applications. The algorithm is motivated by recent results in lexicographic bit allocation. The basic algorithm switches between constant-quality and constant-bit-rate modes based on changes in the fullness of the decoding buffer in the video buffering verifier. We show how the algorithm can be applied either to produce a desired quality level or to meet a global bit budget. Simulation results show that the algorithm compares favorably to the optimal lexicographic algorithm.
Citations: 6
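The buffer-driven mode switch described in the abstract can be sketched as a small simulation. This is an illustrative toy model, not the paper's algorithm: the function name, thresholds, and buffer accounting are all hypothetical.

```python
def control_modes(frame_bits, buffer_size, rate_per_frame, low=0.2, high=0.8):
    """Toy per-frame mode decision: spend the constant-quality bit count
    when the decoder buffer would stay inside [low, high] of capacity,
    otherwise fall back to CBR spending to hold the buffer level steady.

    frame_bits     -- bits constant-quality coding would spend per frame
    buffer_size    -- decoder buffer capacity in bits
    rate_per_frame -- bits delivered to the decoder buffer per frame interval
    """
    fullness = buffer_size / 2.0  # assume the buffer starts half full
    modes = []
    for want in frame_bits:
        # Buffer level if we code this frame at constant quality:
        projected = fullness + rate_per_frame - want
        if low * buffer_size <= projected <= high * buffer_size:
            mode, spend = "constant-quality", want
        else:
            mode, spend = "CBR", rate_per_frame  # spend exactly the channel rate
        fullness += rate_per_frame - spend
        modes.append(mode)
    return modes
```

With small frames the buffer stays in the safe band and the controller keeps constant quality; a frame large enough to threaten underflow forces a CBR frame.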
An asymptotically optimal data compression algorithm based on an inverted index
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.785708
P. Subrahmanya
Abstract: Summary form only given. An alternate approach to representing a data sequence is to associate with each source letter the list of locations at which it appears in the data sequence. We present a data compression algorithm based on a generalization of this idea. The algorithm parses the data with respect to a static dictionary of phrases and associates with each phrase in the dictionary a list of locations at which the phrase appears in the parsed data. Each list of locations is then run-length encoded. This collection of run-length encoded lists constitutes the compressed representation of the data. We refer to the collection of lists as an inverted index. While in information retrieval systems the inverted index is an adjunct to the main database used to speed up searching, we regard it here as a self-contained representation of the database itself. Further, our inverted index does not necessarily list every occurrence of a phrase in the data, only every occurrence in the parsing. This allows us to be asymptotically optimal in terms of compression, though at the cost of a loss in searching efficiency. We discuss this trade-off between compression and searching efficiency. We prove that in terms of compression, this algorithm is asymptotically optimal universally over the class of discrete memoryless sources. We also show that pattern matching can be performed efficiently in the compressed domain. Compressing and storing data in this manner may be useful in applications which require frequent searching of a large but mostly static database.
Citations: 1
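The encoding idea, parsing against a static dictionary and then gap-encoding each phrase's list of parse positions, can be sketched as follows. This is a minimal illustration under assumed names: the greedy longest-match parser and plain gap coding here are simplified stand-ins for the paper's construction.

```python
def inverted_index_compress(data, dictionary):
    """Parse 'data' greedily against a static phrase dictionary, then store,
    for each phrase, the gap-encoded list of positions (in the parsing, not
    in the raw data) at which that phrase occurred."""
    positions = {p: [] for p in dictionary}
    i = 0    # character index into the data
    pos = 0  # index of the current phrase within the parsing
    while i < len(data):
        # Greedy longest match at position i (one simple choice of parser).
        match = max((p for p in dictionary if data.startswith(p, i)),
                    key=len, default=None)
        if match is None:
            raise ValueError("dictionary cannot parse input")
        positions[match].append(pos)
        i += len(match)
        pos += 1

    def gaps(lst):
        # First position, then differences between consecutive positions.
        return [lst[0]] + [b - a for a, b in zip(lst, lst[1:])] if lst else []

    return {p: gaps(lst) for p, lst in positions.items()}
```

The collection of gap-encoded lists is the compressed representation; note it indexes occurrences in the parsing only, matching the trade-off the abstract describes.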
Joint image compression and classification with vector quantization and a two dimensional hidden Markov model
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.755650
Jia Li, R. Gray, R. Olshen
Abstract: We present an algorithm to achieve good compression and classification for images using vector quantization and a two-dimensional hidden Markov model. The feature vectors of image blocks are assumed to be generated by a two-dimensional hidden Markov model. We first estimate the parameters of the model, then design a vector quantizer to minimize a weighted sum of compression distortion and classification risk, the latter being defined as the negative of the maximum log likelihood of states and feature vectors. The algorithm is tested on both synthetic data and real image data. The extension to joint progressive compression and classification is discussed.
Citations: 16
Modified Viterbi algorithm for predictive TCQ
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.785689
T. Ji, W. Stark
Abstract: Summary form only given. A hybrid trellis-tree search algorithm, H-PTCQ, which has the same storage requirement as PTCQ, is presented. We assume n = 2 survivor paths are kept at each state; it is straightforward to extend the algorithm to cases where n ≥ 2. Simulation is conducted over 20-second speech samples using DPCM, PTCQ, and H-PTCQ. The data sequence is truncated into blocks of 1024 samples. The optimal codebooks for a memoryless Laplacian source are used. Predictor coefficients for the 1st-order and 2nd-order predictors are {0.8456} and {1.3435, -0.5888}, respectively. Simulation results indicate that both PTCQ and H-PTCQ have about 3 dB gain over DPCM. H-PTCQ with an 8-state convolutional code has about 0.2 to 0.3 dB gain over PTCQ for the same trellis size; H-PTCQ with a 256-state convolutional code has 0.05 to 0.1 dB gain over the PTCQ counterpart. Compared with a 2M-state PTCQ, the M-state H-PTCQ has the same computational complexity and uses half of the path memory. Since the performance improvement of an 8-state PTCQ over a 4-state PTCQ is about 0.4 dB for a similar set of data, the 0.2 to 0.3 dB gain obtained by using H-PTCQ is quite remarkable. Notice that H-PTCQ enables a transmitter to adapt performance according to resource constraints without changing PTCQ receivers. It is also interesting to observe that the 0.1 dB gain of an 8-state TCQ over a 4-state TCQ, plus the 0.3 dB gain of H-PTCQ, is about the gain of an 8-state PTCQ over a 4-state PTCQ. The results for 256-state quantization also agree with this observation. Therefore, we conclude that the gain of a 2M-state over an M-state PTCQ comes partly from the better internal TCQ quantizer, but mostly from the better prediction obtained by keeping more paths.
Citations: 0
Zerotree coding of wavelet coefficients for image data on arbitrarily shaped support
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.785691
A. Kawanaka, V. Algazi
Abstract: Summary form only given. A wavelet coding method for arbitrarily shaped image data, applicable to object-oriented coding of moving pictures and to the efficient representation of texture data in computer graphics, is proposed. The wavelet transform of an arbitrarily shaped image is obtained by applying the symmetrical extension technique at region boundaries and keeping the locations of the wavelet coefficients. For entropy coding of the wavelet coefficients, the zerotree coding technique is modified to work with arbitrarily shaped regions by treating missing (outside of the decomposed support) coefficients as insignificant and transmitting only those zerotree symbols which are in the decomposed support. The coding performance of the proposed method on several test images that include a person, a teapot, and a necklace is compared to a shape-adaptive DCT and an ordinary DCT method applying low-pass extrapolation to the DCT blocks containing the region boundaries. Experiments show that the proposed method has better coding efficiency than SA-DCT and the ordinary DCT method.
Citations: 8
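The shape-adaptive rule, skipping positions outside the decomposed support and emitting symbols only for in-support coefficients, can be illustrated in a much-simplified, single-level form. The function and symbol names below are invented for illustration; real zerotree coding additionally exploits cross-scale parent-child trees, which this sketch omits.

```python
def zerotree_symbols(coeffs, support, threshold):
    """Emit a significance symbol per in-support coefficient only:
    'S' if |coefficient| >= threshold, 'I' (insignificant) otherwise.
    Out-of-support positions are treated as insignificant and never sent."""
    out = []
    for v, inside in zip(coeffs, support):
        if not inside:
            continue  # outside the decomposed support: nothing transmitted
        out.append("S" if abs(v) >= threshold else "I")
    return out
```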
Quantized frame expansions as source-channel codes for erasure channels
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.755682
Vivek K Goyal, J. Kovacevic, M. Vetterli
Abstract: Quantized frame expansions are proposed as a method for generalized multiple description coding, where each quantized coefficient is a description. Whereas previous investigations have revealed the robustness of frame expansions to additive noise and quantization, this represents a new application of frame expansions. The performance of a system based on quantized frame expansions is compared to that of a system with a conventional block channel code. The new system performs well when the number of lost descriptions (erasures on an erasure channel) is hard to predict.
Citations: 113
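The system structure can be sketched with a toy overcomplete frame: quantize the frame coefficients (each one a description), then reconstruct by least squares from whichever descriptions survive the erasure channel. A minimal sketch under assumed names and a uniform quantizer, not the authors' implementation:

```python
import numpy as np

def frame_encode(x, F, step=0.1):
    """Expand x with an overcomplete frame F (N x K rows, N > K) and
    uniformly quantize each coefficient; each coefficient is a description."""
    return np.round(F @ x / step) * step

def frame_decode(y, F, received):
    """Least-squares reconstruction from the surviving descriptions
    (row indices listed in 'received')."""
    Fs, ys = F[received], y[received]
    return np.linalg.lstsq(Fs, ys, rcond=None)[0]
```

As long as the surviving rows of F still span the signal space (at least K linearly independent rows), the decoder recovers the source up to quantization error, with no need to predict in advance which descriptions will be lost.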
On entropy-constrained residual vector quantization design
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.785683
Y. Gong, M. Fan, Chien-Min Huang
Abstract: Summary form only given. Entropy-constrained residual vector quantization (EC-RVQ) has been shown to be a competitive compression technique. Its design procedure is an iterative process which typically consists of three steps: encoder update, decoder update, and entropy coder update. We propose a new algorithm for EC-RVQ design. The main features of our algorithm are: (i) in the encoder update step, we propose a variation of the exhaustive search encoder that significantly speeds up encoding at no expense in rate-distortion performance; (ii) in the decoder update step, we propose a new method that simultaneously updates the codebooks of all stages; the method is to form and solve a certain least-squares problem, and we show that both tasks can be done very efficiently; (iii) the rate-distortion Lagrangian decreases at every step, which guarantees the convergence of the algorithm. We have performed some preliminary numerical experiments to test the proposed algorithm. Both random sources and still images are considered. For random sources, the training sequence size is 2500 and the vector size is 4. For still images, the training set consists of monochrome images from the USC database and the vector size is 4×4.
Citations: 2
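The entropy-constrained selection rule that underlies EC-RVQ, choosing the index that minimizes a Lagrangian of distortion plus scaled code length, can be sketched per stage as follows. This is an illustrative sketch with hypothetical names; the paper's contributions are the accelerated search and the joint all-stage decoder update built around this rule, which are not reproduced here.

```python
import numpy as np

def ec_encode(x, codebook, code_lengths, lam):
    """Entropy-constrained codeword choice: minimize the Lagrangian
        J_i = ||x - c_i||^2 + lam * len_i,
    trading distortion against the entropy-coded rate of index i."""
    costs = [np.sum((x - c) ** 2) + lam * l
             for c, l in zip(codebook, code_lengths)]
    return int(np.argmin(costs))
```

With lam = 0 this reduces to plain nearest-neighbor search; increasing lam biases the encoder toward cheaply coded (frequent) codewords, which is what makes the overall design entropy-constrained.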
Codes for data synchronization with timing
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.755694
N. Kashyap, D. Neuhoff
Abstract: This paper investigates the design and analysis of data synchronization codes whose decoders have the property that, in addition to reestablishing correct decoding after encoded data is lost or afflicted with errors, they produce the original time index of each decoded data symbol modulo some integer T. The motivation for such data synchronization with timing is that in many situations where data must be encoded, it is not sufficient for the decoder to present a sequence of correct data symbols; the user also needs to know the position in the original source sequence of the symbols being presented. With this goal in mind, periodic prefix-synchronized (PPS) codes are introduced and analyzed on the basis of their synchronization delay D, rate R, and timing span T. Introduced are two specific PPS designs, called natural marker codes and cascaded codes. A principal result is that when coding binary data with rate R, the largest possible timing span attainable with PPS codes grows exponentially with delay D, with exponent D(1-R). Thus, a large timing span can be attained with little redundancy and moderate values of delay.
Citations: 5
Complexity-distortion tradeoffs in vector matching based on probabilistic partial distance techniques
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.755689
Krisda Lengwehasatit, Antonio Ortega
Abstract: We consider the problem of searching for the best match for an input among a set of vectors, according to some predetermined metric. Examples of this problem include the search for the best match for an input in a VQ encoder and the search for a motion vector in motion estimation-based video coding. We propose an approach that computes a partial distance metric and uses prior probabilistic knowledge of the reliability of the estimate to decide whether to stop the distance computation. This is achieved with a simple hypothesis test, and the result, an extension of the partial distance technique of Bei and Gray (1985), provides additional computation savings at the cost of a (controllable) loss in matching performance.
Citations: 13
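For reference, the baseline partial distance technique of Bei and Gray that the paper extends can be sketched as follows. The probabilistic stopping test that is the paper's contribution is not reproduced here; this sketch only shows the exact (lossless) early exit it builds on.

```python
def partial_distance_search(x, codebook):
    """Partial-distance VQ search: accumulate the squared distance
    dimension by dimension and abandon a codeword as soon as the partial
    sum already meets or exceeds the best full distance found so far.
    Returns (best_index, best_squared_distance)."""
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:   # early exit: this codeword cannot win
                break
        else:                 # loop completed: new best match
            best_i, best_d = i, d
    return best_i, best_d
```

This early exit never changes the answer; the paper's probabilistic variant stops even earlier by hypothesis-testing the partial sum, accepting a controllable chance of a suboptimal match in exchange for further savings.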
Graceful degradation over packet erasure channels through forward error correction
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) | Pub Date: 1999-03-29 | DOI: 10.1109/DCC.1999.755658
A. Mohr, E. Riskin, R. Ladner
Abstract: We present an algorithm that assigns unequal amounts of forward error correction to progressive data so as to provide graceful degradation as packet losses increase. We use the SPIHT coder to compress images in this work, but our algorithm can protect any progressive compression scheme. The algorithm can also use almost any function as a model of packet loss conditions. We find that for an exponential packet loss model with a mean of 20% and a total rate of 0.2 bpp, good image quality can be obtained even when 40% of transmitted packets are lost.
Citations: 105
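The unequal-protection idea, putting more FEC on earlier (more important) parts of a progressive stream so that quality degrades gracefully with the number of lost packets, can be sketched at the level of decodability bookkeeping. This is a toy model with hypothetical names, not the paper's FEC assignment algorithm: assume layer k of the stream carries enough redundancy to survive up to fec_per_layer[k] packet erasures.

```python
def decodable_prefix(fec_per_layer, n_lost):
    """Count how many consecutive layers (hence how long a prefix of the
    progressive bitstream) remain decodable after n_lost packet erasures,
    given a non-increasing FEC assignment across layers."""
    prefix = 0
    for fec in fec_per_layer:   # earlier layers carry more redundancy
        if n_lost <= fec:
            prefix += 1
        else:
            break  # progressive data: layers past the first loss are useless
    return prefix
```

Because the FEC allocation is non-increasing, each additional lost packet chops off only the tail of the stream, which is exactly the graceful-degradation behavior the abstract describes.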