Proceedings DCC 2002. Data Compression Conference — Latest Publications

Combining FEC and optimal soft-input source decoding for the reliable transmission of correlated variable-length encoded signals
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.999946
J. Kliewer, R. Thobaben
Abstract: We utilize both the implicit residual source correlation and the explicit redundancy from a forward error correction (FEC) scheme for the error protection of packetized variable-length encoded source indices. The implicit source correlation is exploited in a novel symbol-based soft-input a-posteriori probability (APP) decoder, which leads to an optimal decoding process in combination with a mean-squares or maximum a-posteriori probability estimation of the reconstructed source signal. When, additionally, the variable-length encoded source data is protected by channel codes, an iterative source-channel decoder can be obtained in the same way as for serially concatenated codes, where the outer constituent decoder is replaced by the proposed APP source decoder. Simulation results show that, by additionally considering the correlations between the variable-length encoded source indices, the error-correction performance can be greatly increased.
Citations: 30

Progressive coding of palette images and digital maps
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.999974
S. Forchhammer, J. M. Salinas
Abstract: A 2D version of PPM (Prediction by Partial Matching) coding is introduced simply by combining a 2D template with the standard PPM coding scheme. A simple scheme for resolution reduction is given, and the 2D PPM scheme is extended to resolution-progressive coding by placing pixels in a lower-resolution image layer. The resolution is increased by a factor of 2 in each step. The 2D PPM coding is applied to palette images and street maps. The sequential results are comparable to PWC. The PPM results are a little better for the palette images with few colors (up to 4-5 bpp) and a little worse for the images with more colors. For street maps the 2D PPM is slightly better. The PPM-based resolution-progressive coding provides a better result than coding the resolution layers as individual images. Compared to GIF, the resolution-progressive 2D PPM's coding efficiency is significantly better. An example of combined content-layer/spatial progressive coding is also given.
Citations: 18

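The 2D PPM idea above — a causal 2D pixel template feeding a standard PPM model with escapes — can be sketched minimally. This is not the paper's implementation: it assumes a hypothetical three-pixel causal template (W, N, NW), a simplified PPM-A-style escape estimate, and a made-up function name `ppm2d_code_length`; it reports the ideal adaptive code length in bits rather than producing an arithmetic-coded bitstream.

```python
import math

def causal_context(img, r, c, order):
    # Causal 2D template: W, N, NW neighbors (out-of-image -> None),
    # truncated to the requested context order.
    neigh = []
    for dr, dc in [(0, -1), (-1, 0), (-1, -1)]:
        rr, cc = r + dr, c + dc
        neigh.append(img[rr][cc] if 0 <= rr and 0 <= cc else None)
    return tuple(neigh[:order])

def ppm2d_code_length(img, alphabet, max_order=3):
    """Ideal code length (bits) of a 2D PPM-style model with escapes."""
    counts = {}  # context tuple -> {symbol: count}
    bits = 0.0
    for r in range(len(img)):
        for c in range(len(img[0])):
            x = img[r][c]
            # Try contexts from longest to shortest, escaping on misses.
            for order in range(max_order, -1, -1):
                if order == 0:
                    bits += math.log2(len(alphabet))  # uniform fallback
                    break
                ctx = causal_context(img, r, c, order)
                table = counts.setdefault(ctx, {})
                total = sum(table.values())
                if table.get(x, 0) > 0:
                    bits += -math.log2(table[x] / (total + 1))  # hit
                    break
                bits += math.log2(total + 1)  # escape to a shorter context
            # Update every context order with the observed symbol.
            for order in range(1, max_order + 1):
                t = counts.setdefault(causal_context(img, r, c, order), {})
                t[x] = t.get(x, 0) + 1
    return bits
```

On a constant image the model adapts quickly, so the total code length falls far below one bit per pixel — the same effect the paper exploits on low-color palette images.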
Perceptual preprocessing techniques applied to video compression: some result elements and analysis
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.1000006
Gwenaelle Marquant
Abstract: Summary form only given. Developments in video coding research deal with solutions to improve picture quality while decreasing bit rates. However, no major breakthrough in compression has emerged, and low-bit-rate high-quality video compression is still an open issue. The compression scheme is generally decomposed into two stages: coding and decoding. In order to improve compression efficiency, a complementary solution may consist in introducing a preprocessing stage before the encoding process and/or a post-processing step after decoding. For this purpose, instead of using the usual (Y, U, V) representation space to compress the video signal, where the video is encoded along separate channels (luminance Y, chrominance U, chrominance V), we propose to choose other channels by means of a color preprocessing based upon perceptual and physics-based approaches. We compare an original H.26L encoder (ITU standard for video coding), i.e. without preprocessing, and the same H.26L encoder with a preprocessing stage, to evaluate the extent to which the preprocessing stage increases the compression efficiency, in particular with perceptual solutions.
Citations: 6

A source coding approach to classification by vector quantization and the principle of minimum description length
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.999978
Jia Li
Abstract: An algorithm for supervised classification using vector quantization and entropy coding is presented. The classification rule is formed from a set of training data {(X_i, Y_i)}, i = 1, ..., n, which are independent samples from a joint distribution P_XY. Based on the principle of minimum description length (MDL), a statistical model that approximates the distribution P_XY ought to enable efficient coding of X and Y. On the other hand, we expect a system that encodes (X, Y) efficiently to provide ample information on the distribution P_XY. This information can then be used to classify X, i.e., to predict the corresponding Y based on X. To encode both X and Y, a two-stage vector quantizer is applied to X and a Huffman code is formed for Y conditioned on each quantized value of X. The optimization of the encoder is equivalent to the design of a vector quantizer with an objective function reflecting the joint penalty of quantization error and misclassification rate. This vector quantizer provides an estimate of the conditional distribution of Y given X, which in turn yields an approximation to the Bayes classification rule. This algorithm, namely discriminant vector quantization (DVQ), is compared with learning vector quantization (LVQ) and CART on a number of data sets. DVQ outperforms the other two on several data sets. The relation between DVQ, density estimation, and regression is also discussed.
Citations: 6

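The classification half of the DVQ pipeline above — quantize X, estimate P(Y | quantized X) per cell, then apply an approximate Bayes rule — can be sketched in a few lines. This is only an illustration under stated assumptions: a fixed codebook stands in for the paper's jointly optimized two-stage quantizer, empirical class counts stand in for the conditional Huffman model, and the names `train_dvq`/`classify` are invented here.

```python
def nearest(codebook, x):
    # Index of the nearest codevector (squared Euclidean distance).
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(codebook[k], x)))

def train_dvq(samples, labels, codebook):
    # Per-cell class histogram: an empirical estimate of P(Y | quantized X).
    hist = {k: {} for k in range(len(codebook))}
    for x, y in zip(samples, labels):
        cell = hist[nearest(codebook, x)]
        cell[y] = cell.get(y, 0) + 1
    return hist

def classify(hist, codebook, x):
    # Approximate Bayes rule: most frequent class in the cell containing x.
    cell = hist[nearest(codebook, x)]
    return max(cell, key=cell.get) if cell else None
```

The design point the abstract makes is that the codebook itself should be trained against a combined quantization-error plus misclassification objective; with a quantizer fixed in advance, the sketch degenerates to a piecewise-constant (nearest-prototype) classifier.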
Compressor performance, absolutely!
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.1000017
M. Titchener
Abstract: Summary form only given. Titchener (see Proc. DCC00, IEEE Society Press, p.353-62, 2000, and IEEE-ISIT, MIT, Boston, August 1998) defined a computable grammar-based entropy measure (T-entropy) for finite strings. Ebeling, Steuer and Titchener (see Stochastics and Dynamics, vol.1, no.1, 2000) and Titchener and Ebeling (see Proc. DCC01, IEEE Society Press, p.520, 2001) demonstrated it, against known results for the logistic map, to be a practical way to compute the Shannon information content of data files. A range of binary encodings of the logistic map dynamics have been prepared from a generating bi-partition and with selected normalised entropies, 0.1-1.0 bits/symbol, in steps of 0.1. This corpus of ten test files has been used to evaluate the 'absolute' performance of a series of popular compressors.
Citations: 2

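The test corpus described above is built by symbolizing logistic-map orbits through a generating bi-partition. A minimal sketch of that construction (the function name, burn-in length, and starting point are choices made here, not taken from the paper; tuning the map parameter is what selects the normalised entropy of the output):

```python
def logistic_bits(r, x0, n, burn_in=100):
    """Binary symbol sequence from the logistic map x -> r*x*(1-x),
    read through the generating bi-partition at x = 1/2."""
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = r * x * (1 - x)
    bits = []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

At r = 4 the map is fully chaotic and the bi-partition yields a sequence near 1 bit/symbol; at parameter values inside periodic windows the orbit settles onto a cycle and the symbol sequence becomes trivial, which is how files of intermediate entropy can be dialed in.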
Image coding with the MAP criterion
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.999996
T. Eriksson, John B. Anderson, M. Novak
Abstract: Summary form only given. BCJR-based source coding of image residuals is investigated. From a trellis representation of the residual, a joint source-channel coding system is formed. Then the BCJR algorithm is applied to find the MAP encoding. MAP and minimized-squared-error encoding are compared. The novelty of this work is the use of the BCJR algorithm and the MAP criterion in the source coding procedure. The source encoding system described preserves more features than an MSE-based encoder. Also, blocking artifacts are reduced. Comparisons may be found in the full paper version (see http://www.it.lth.se/tomas/eriksson_novak_anderson_dcc02.ps, 2001).
Citations: 2

Overhead-constrained rate-allocation for scalable video transmission over networks
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.999998
Bo Hong, Aria Nosratinia
Abstract: Summary form only given. Forward error correction (FEC) based schemes are used widely to address the packet loss problem for Internet video. Given the total available bandwidth, finding the optimal bit allocation is very important in FEC-based video, because the FEC bit rate limits the rate available to compress video. We want to give proper protection to the source, but also prevent unwanted FEC rate expansion. The rate of packet headers is often ignored in allocating bit rate. We show that this packetization overhead has a significant influence on system performance in many cases. Decreasing packet size increases the rate of packet headers, thus reducing the available rate for the source and its FEC codes. On the other hand, smaller packet size allows a larger number of packets, in which case it can be shown that the efficiency of FEC codes improves. We show that packet size should be optimized to balance the effect of packet headers and the efficiency of FEC codes. We develop a probabilistic framework for the solution of the rate-allocation problem in the presence of packet overhead. We implement our solution on the MPEG-4 fine granularity scalability (FGS) mode. To show the flexibility of our technique, we use an unequal error protection scheme with FGS. Experimental results show that our overhead-constrained method leads to significant improvements in reconstructed video quality.
Citations: 5

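The tradeoff the abstract argues — smaller packets waste bandwidth on headers but make cross-packet FEC statistically more reliable — can be made concrete with a toy calculation. This is not the paper's probabilistic framework: it assumes an idealized (n, k) erasure code over independently lost packets and invented parameter names (`fec_fraction`, `payload`, `header`).

```python
import math

def fec_fail_prob(n, k, p):
    # An (n, k) erasure code spread over n packets fails iff more than
    # n - k packets are lost (independent losses with probability p).
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def expected_goodput(total_bits, payload, header, p, fec_fraction=0.2):
    # Smaller payloads -> more header overhead, but also more packets n,
    # which tightens the erasure code's loss tolerance around its mean.
    n = total_bits // (payload + header)
    if n < 2:
        return 0.0
    k = max(1, round(n * (1 - fec_fraction)))
    return k * payload * (1 - fec_fail_prob(n, k, p))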
Data compression of correlated non-binary sources using punctured turbo codes
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.999962
Ying Zhao, J. Garcia-Frías
Abstract: We consider the case of two correlated non-binary sources. Data compression is achieved by transforming the sequences of non-binary symbols into sequences of bits and then using punctured turbo codes as source encoders. Each source is compressed without knowledge of the other source, and no information about the correlation between the sources is required in the encoding process. Compression is achieved by puncturing, which is adjusted to obtain the desired compression rate. The source decoder utilizes iterative schemes over the compressed binary sequences, and recovers the non-binary symbol sequences from both sources. The performance of the proposed scheme is close to the theoretical limit predicted by the Slepian-Wolf (1973) theorem.
Citations: 58

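The mechanism by which puncturing sets the compression rate in schemes like the one above can be sketched in isolation. This toy uses a rate-1 accumulator as a stand-in for one turbo constituent encoder and a hypothetical 1-in-4 puncturing pattern; the actual iterative turbo decoding that exploits the inter-source correlation is far beyond this sketch.

```python
def accumulator_parity(bits):
    # Rate-1 recursive encoder (running XOR), a toy stand-in for one
    # turbo constituent encoder.
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def puncture(seq, pattern):
    # Transmit position i iff pattern[i % len(pattern)] is 1; puncturing
    # more positions yields a higher compression ratio.
    return [b for i, b in enumerate(seq) if pattern[i % len(pattern)]]

source = [1, 0, 1, 1, 0, 0, 1, 0]
parity = accumulator_parity(source)
sent = puncture(parity, [1, 0, 0, 0])  # keep 1 parity bit in 4 -> 4:1
```

Because only punctured parity is transmitted, the encoder needs no knowledge of the other source; the decoder recovers the data by treating the correlated side information as a virtual channel, which is why the achievable rate approaches the Slepian-Wolf bound.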
Context tree compression of multi-component map images
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.999959
P. Kopylov, P. Fränti
Abstract: We consider compression of multi-component map images by context modeling and arithmetic coding. We apply an optimized multi-level context tree for modeling the individual binary layers. The context pixels can be located within a search area in the current layer, or in a reference layer that has already been compressed. The binary layers are compressed using an optimized processing sequence that makes maximal utilization of the inter-layer dependencies. The structure of the context tree is a static variable-depth binary tree, and the context information is stored only in the leaves of the tree. The proposed technique achieves an improvement of about 25% over a static 16-pixel context template, and 15% over a similar single-level context tree.
Citations: 9

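The static fixed-template baseline that the paper improves upon can be sketched as follows: each pixel is coded with an adaptive binary model conditioned on a causal template of already-coded neighbors. The template offsets, the Laplace prior, and the name `context_code_length` are illustration choices, not the paper's; the paper's contribution is replacing the fixed template with an optimized variable-depth tree, possibly reaching into a reference layer.

```python
import math

def context_code_length(layer, template):
    """Ideal adaptive code length (bits) of a binary layer under a fixed
    causal context template given as (dr, dc) offsets."""
    h, w = len(layer), len(layer[0])
    counts = {}  # context tuple -> [count of 0s, count of 1s]
    bits = 0.0
    for r in range(h):
        for c in range(w):
            ctx = tuple(
                layer[r + dr][c + dc]
                if 0 <= r + dr < h and 0 <= c + dc < w else 0
                for dr, dc in template)
            n = counts.setdefault(ctx, [1, 1])  # Laplace prior
            b = layer[r][c]
            bits += -math.log2(n[b] / (n[0] + n[1]))
            n[b] += 1
    return bits
```

A context tree generalizes this by letting the number of conditioning pixels vary per context, so dense regions get long contexts while sparse ones avoid count dilution.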
Semi-discrete matrix transforms (SDD) for image and video compression
Proceedings DCC 2002. Data Compression Conference. Pub Date: 2002-04-02. DOI: 10.1109/DCC.2002.1000027
Sacha Zyto, A. Grama, W. Szpankowski
Abstract: Summary form only given. A wide variety of matrix transforms have been used for compression of image and video data. Transforms have also been used for motion estimation and quantization. One such transform is the singular value decomposition (SVD), which relies on low-rank approximations of the matrix for computational and storage efficiency. In this study, we describe the use of a variant of SVD in image and video compression. This variant, first proposed by Peleg and O'Leary, called semidiscrete decomposition (SDD), restricts the elements of the outer-product vectors to 0/1/-1. Thus approximations of much higher rank can be stored for the same amount of storage. We demonstrate the superiority of SDD over SVD for a variety of compression schemes. We also show that DCT-based compression is still superior to SDD-based compression. We also demonstrate that SDD facilitates fast and accurate pattern matching and motion estimation, thus presenting excellent opportunities for improved compression.
Citations: 10

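The storage advantage claimed above comes from each SDD term being a scale factor d times an outer product of vectors with entries in {-1, 0, +1} (under 2 bits per entry, versus a float per entry for an SVD term). A crude sketch of one such term — this is a sign-quantized power-iteration heuristic with a least-squares scale fit, not the Peleg-O'Leary refinement algorithm the paper relies on, and `sdd_rank1` is a name invented here:

```python
def sdd_rank1(A, iters=20):
    """One greedy SDD term: approximate A ~ d * x * y^T with the entries
    of x and y restricted to {-1, 0, +1}."""
    m, n = len(A), len(A[0])
    y = [1.0] * n
    for _ in range(iters):  # power iteration to find the sign pattern
        x = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
        y = [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]
        norm = max(abs(v) for v in y) or 1.0
        y = [v / norm for v in y]
    sx = [0 if abs(v) < 1e-12 else (1 if v > 0 else -1) for v in x]
    sy = [0 if abs(v) < 1e-12 else (1 if v > 0 else -1) for v in y]
    # Least-squares scale: d = <A, x y^T> / (||x||^2 * ||y||^2)
    num = sum(A[i][j] * sx[i] * sy[j] for i in range(m) for j in range(n))
    den = (sum(v * v for v in sx) * sum(v * v for v in sy)) or 1
    return num / den, sx, sy
```

A full SDD repeats this greedily on the residual A - d x yᵀ, accumulating many cheap ternary terms where SVD would store a few expensive real-valued ones.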