Latest Publications: 2009 Data Compression Conference

LZB: Data Compression with Bounded References
2009 Data Compression Conference. Pub Date: 2009-03-16. DOI: 10.1109/DCC.2009.70
M. Banikazemi
Abstract: In this paper, we propose a new compression/decompression algorithm called LZB, which belongs to a class of algorithms related to Lempel-Ziv (LZ). The distinguishing characteristic of LZB is that it allows decompression from arbitrary points of the compressed data. This is accomplished by setting a limit on how far back a reference in the compressed data can directly or indirectly point. We enforce this limit by using a sliding "gate." During compression, we keep track of the origin of each input symbol: the earliest symbol in the input data that the symbol (directly or indirectly) refers to. Using this information, we avoid emitting any reference that goes beyond the gate boundary. We modified the gzip implementation of LZ77 to implement LZB. We then compared LZB with the alternative method in which the data is segmented into smaller pieces and each piece is compressed separately using standard gzip. The results show that LZB improves the compression ratio by 10 to 50 percent for segment sizes of 1024 down to 128 bytes.
Citations: 10
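The gate mechanism described in the abstract can be illustrated with a toy encoder. This is a hypothetical sketch, not the authors' gzip-based implementation: it tracks each symbol's origin and rejects any match whose copied symbols would transitively reach back beyond the gate.

```python
# Toy illustration of LZB's bounded-reference idea (hypothetical sketch,
# not the authors' gzip-based code). origin[i] is the earliest input
# position that symbol i transitively refers to; a match is accepted only
# if every copied symbol's origin lies within `gate` positions.

def lzb_compress(data, gate, min_match=3):
    n = len(data)
    origin = [0] * n                # origin[i]: earliest position i refers to
    tokens = []
    i = 0
    while i < n:
        best_len, best_src = 0, -1
        for j in range(max(0, i - gate), i):     # candidate match starts
            l = 0
            # extend while symbols agree and origins stay inside the gate
            while (i + l < n and j + l < i and data[j + l] == data[i + l]
                   and origin[j + l] >= i - gate):
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        if best_len >= min_match:
            for l in range(best_len):            # copies inherit origins
                origin[i + l] = origin[best_src + l]
            tokens.append(("ref", i - best_src, best_len))
            i += best_len
        else:
            origin[i] = i                        # a literal is its own origin
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lzb_decompress(tokens):
    out = []
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, dist, length = t
            for _ in range(length):
                out.append(out[-dist])
    return "".join(out)
```

Because no emitted reference chain reaches back more than `gate` symbols, a decoder holding only the most recent `gate` symbols of output can resume from a token boundary, which is the property the paper exploits.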
On Transform Coding with Dithered Quantizers
2009 Data Compression Conference. Pub Date: 2009-03-16. DOI: 10.1109/DCC.2009.76
E. Akyol, K. Rose
Abstract: This paper is concerned with optimal transform coding in conjunction with dithered quantization. While the optimal deterministic quantizer's error is uncorrelated with the reconstructed value, the dithered quantizer yields quantization errors that are correlated with the reconstruction but are white and independent of the source. These properties offer potential benefits, but also have implications for the optimization of the rest of the coder. We derive the optimal transform for consequent dithered quantization. For fixed-rate coding, we show that the transform derived for dithered quantization is universally optimal (for all sources), unlike the conventional quantization case, where optimality of the Karhunen-Loeve transform is guaranteed only for Gaussian sources. Moreover, we establish variable-rate coding optimality for Gaussian sources.
Citations: 1
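The property the abstract relies on, namely that a subtractively dithered uniform quantizer yields error that is independent of the input, is easy to check numerically. A minimal sketch, assuming uniform dither over one quantization step:

```python
# Subtractive-dither uniform quantizer (illustrative sketch). With dither
# Z ~ Uniform(-delta/2, delta/2) added before quantization and subtracted
# after, the end-to-end error is Uniform(-delta/2, delta/2) regardless of
# the input value: the property the paper builds its transform design on.

import random

def dithered_quantize(x, delta, rng):
    z = rng.uniform(-delta / 2, delta / 2)
    q = delta * round((x + z) / delta)   # uniform quantizer, step delta
    return q - z                         # subtract the dither at the decoder

rng = random.Random(0)
errors = [dithered_quantize(0.3, 1.0, rng) - 0.3 for _ in range(10000)]
mean_err = sum(errors) / len(errors)     # near 0, independent of the input 0.3
```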
Clustered Reversible-KLT for Progressive Lossy-to-Lossless 3d Image Coding
2009 Data Compression Conference. Pub Date: 2009-03-16. DOI: 10.1109/DCC.2009.7
Ian Blanes, J. Serra-Sagristà
Abstract: The RKLT is a lossless approximation to the KLT, and has recently been employed for progressive lossy-to-lossless coding of hyperspectral images. Both yield very good coding performance, but at a high computational price. In this paper we investigate two RKLT clustering approaches to lessen the computational complexity: a normal clustering approach, which still yields good performance; and a multi-level clustering approach, which has almost no quality penalty compared to the original RKLT. An analysis of rate-distortion evolution and of lossless compression ratio is provided. The proposed approaches supply additional benefits, such as spectral scalability and a decrease in the side information needed to invert the transform. Furthermore, since with a clustering approach the SERM factorization coefficients are bounded to a finite range, the proposed methods allow coding of large three-dimensional images within JPEG2000.
Citations: 25
Slepian-Wolf Coding of Binary Finite Memory Source Using Burrows-Wheeler Transform
2009 Data Compression Conference. Pub Date: 2009-03-16. DOI: 10.1109/DCC.2009.54
Chao Chen, Xiangyang Ji, Qionghai Dai, Xiaodong Liu
Abstract: In this paper, an asymmetric Slepian-Wolf coding (SWC) scheme for a binary finite memory source (FMS) is proposed. The a priori information about the source is extracted from the side information at the decoder by the Burrows-Wheeler Transform (BWT). This information is then utilized for LDPC-code-based decoding. Benefiting from the universality of the BWT, our coding scheme can be applied to any FMS. Experimental results show that our scheme performs significantly better than a scheme that does not utilize the a priori information for decoding.
Citations: 1
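The role the BWT plays here (grouping symbols that share a context, so that finite-memory statistics can be read off the transformed side information) can be seen in a minimal transform pair. This is a generic textbook BWT sketch, not the authors' decoder:

```python
# Minimal Burrows-Wheeler Transform pair (textbook sketch, O(n^2 log n);
# not the authors' decoder). The last column of the sorted rotation matrix
# groups together symbols that share a following context, which is what
# makes context statistics of a finite memory source easy to extract.

def bwt(s, sentinel="$"):
    s += sentinel                                   # unique end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last, sentinel="$"):
    n = len(last)
    table = [""] * n
    for _ in range(n):                              # prepend-and-sort method
        table = sorted(last[i] + table[i] for i in range(n))
    return next(r for r in table if r.endswith(sentinel))[:-1]
```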
Perceptual Relevance Measure for Generic Shape Coding
2009 Data Compression Conference. Pub Date: 2009-03-01. DOI: 10.1109/DCC.2009.14
Zhongyuan Lai, Wenyu Liu, Yuan Zhang
Abstract: We address a fundamental problem in subjective reconstruction quality by introducing a perceptual relevance measure (PRM) for generic vertex-based shape coding. Different from the traditional absolute distance measure (ADM), our proposed measure systematically considers the turn angle and the two adjacent segments when calculating the visual significance of the corresponding vertex. We embed the proposed measure into top-down and bottom-up frameworks, into both the vertex selection and adjustment stages, and into the class-one and class-two distortion measure definitions. The experimental results show that our proposed measure can significantly improve subjective reconstruction quality as well as objective rate-distortion performance, especially for object shapes with sharp saliences.
Citations: 4
Compression-Induced Rendering Distortion Analysis for Texture/Depth Rate Allocation in 3D Video Compression
2009 Data Compression Conference. Pub Date: 2009-03-01. DOI: 10.1109/DCC.2009.27
Yanwei Liu, Siwei Ma, Qingming Huang, Debin Zhao, Wen Gao, N. Zhang
Abstract: In 3D video applications, the virtual view is generally rendered from the compressed texture and depth. Texture and depth compression with different bit-rate overheads can lead to different virtual-view rendering qualities. In this paper, we analyze the compression-induced rendering distortion for the virtual view. Based on the 3D warping principle, we first address how texture and depth compression affect the virtual view quality, and then derive an upper bound for the compression-induced rendering distortion. The derived distortion bound depends on the compression-induced depth error and texture intensity error. Simulation results demonstrate that the theoretical upper bound is an approximate indication of the rendering quality and can be used to guide sequence-level texture/depth rate allocation for 3D video compression.
Citations: 12
Joint Network-Source Video Coding Based on Lagrangian Rate Allocation
2009 Data Compression Conference. Pub Date: 2009-03-01. DOI: 10.1109/DCC.2009.5
Xuguang Lan, Nanning Zheng, Jianru Xue, Ce Li, Songlin Zhao
Abstract: Joint network-source video coding (JNSC) targets the optimum delivery of a video source to a number of destinations over a network with capacity constraints. In this paper, a practical scalable multiple description coding is proposed for JNSC, based on Lagrangian rate allocation and scalable video coding. After the spatiotemporal wavelet transformation of the input video sequence, bit-plane coding, and context-based adaptive binary arithmetic coding, joint network-source coding is performed on the coding passes of the code blocks using Lagrangian rate allocation. The relationship of the rate-distortion slope ratio to the receiving probability on network links is derived under the link capacity constraints. In this way, scalable multiple descriptions can be generated to optimize the delivery to be robust and adaptive to the dynamics of heterogeneous networks. The performance of the proposed scalable multiple description coding is evaluated in a peer-to-peer streaming network.
Citations: 2
Low Complexity Spatio-Temporal Key Frame Encoding for Wyner-Ziv Video Coding
2009 Data Compression Conference. Pub Date: 2009-03-01. DOI: 10.1109/DCC.2009.57
Ghazaleh Esmaili, P. Cosman
Abstract: In most Wyner-Ziv video coding approaches, the temporal correlation of key frames is not exploited, since they are simply intra encoded and decoded. In this paper, using the previously decoded key frame as the side information for the key frame to be decoded, we propose new methods of coding key frames in order to improve rate-distortion performance. These schemes, which are based on switching between intra mode and Wyner-Ziv mode for a given block or a given frequency band, attempt to exploit both the spatial and temporal correlation of key frames while satisfying the low-complexity encoding requirement of Distributed Video Coding (DVC). Simulation results show that the proposed methods achieve up to 5 dB improvement over conventional intra coding for relatively low-motion sequences and up to 1.3 dB improvement for relatively high-motion sequences.
Citations: 12
On the Use of Suffix Arrays for Memory-Efficient Lempel-Ziv Data Compression
2009 Data Compression Conference. Pub Date: 2009-03-01. DOI: 10.1109/DCC.2009.50
Artur J. Ferreira, Arlindo L. Oliveira, Mário A. T. Figueiredo
Abstract: Much research has been devoted to optimizing algorithms of the Lempel-Ziv (LZ) 77 family, both in terms of speed and memory requirements. Binary search trees and suffix trees (ST) are data structures that have often been used for this purpose, as they allow fast searches at the expense of memory usage. In recent years, there has been interest in suffix arrays (SA), due to their simplicity and low memory requirements. One key point is that an SA can solve the sub-string problem almost as efficiently as an ST, using less memory. This paper proposes two new SA-based algorithms for LZ encoding, which require no modifications on the decoder side. Experimental results on standard benchmarks show that our algorithms, though not faster, use 3 to 5 times less memory than their ST counterparts. Another important feature of our SA-based algorithms is that the amount of memory is independent of the text to be searched, so the memory to be allocated can be defined a priori. These features of low and predictable memory requirements are of the utmost importance in several scenarios, such as embedded systems, where memory is at a premium and speed is not critical. Finally, we point out that the new algorithms are general, in the sense that they are adequate for applications other than LZ compression, such as text retrieval and forward/backward sub-string search.
Citations: 8
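The memory argument in the abstract rests on the fact that a suffix array is just a sorted permutation of suffix start positions, searchable by binary search. A toy sketch (quadratic construction for clarity; the paper's encoders are gzip-compatible and far more careful with memory):

```python
# Toy suffix-array substring search (illustrative; real SA construction is
# O(n) or O(n log n), and the paper's encoders avoid materializing keys).

import bisect

def suffix_array(text):
    # sorted start positions of all suffixes: n integers, versus the
    # pointer-heavy nodes of a suffix tree
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, pattern):
    # binary search over the sorted suffixes for the pattern's range
    keys = [text[i:i + len(pattern)] for i in sa]  # materialized for clarity only
    lo = bisect.bisect_left(keys, pattern)
    hi = bisect.bisect_right(keys, pattern)
    return sorted(sa[lo:hi])
```

For LZ77 encoding, the longest-previous-match query at each input position is answered from this same sorted order, which is how the SA replaces the suffix tree.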
An Implementable Scheme for Universal Lossy Compression of Discrete Markov Sources
2009 Data Compression Conference. Pub Date: 2009-01-15. DOI: 10.1109/DCC.2009.72
S. Jalali, A. Montanari, T. Weissman
Abstract: We present a new lossy compressor for discrete sources. To code a source sequence $x^n$, the encoder starts by assigning a certain cost to each reconstruction sequence. It then finds the reconstruction that minimizes this cost and describes it losslessly to the decoder via a universal lossless compressor. The cost of a sequence is given by a linear combination of its empirical probabilities of some order $k+1$ and its distortion relative to the source sequence. The linear structure of the cost in the empirical count matrix allows the encoder to employ a Viterbi-like algorithm to obtain the minimizing reconstruction sequence simply. We identify a choice of coefficients for the linear combination in the cost function which ensures that the algorithm universally achieves the optimum rate-distortion performance of any Markov source in the limit of large $n$, provided that $k$ grows as $o(\log n)$.
Citations: 6
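The shape of the cost described in the abstract can be mimicked with a toy functional. The paper's cost is exactly linear in the (k+1)-gram count matrix, with coefficients chosen so that minimizing it approximates conditional entropy plus slope-weighted distortion; the sketch below (hypothetical, simplified) uses the empirical conditional entropy itself as a stand-in.

```python
# Toy stand-in for the paper's cost (illustrative, simplified): k-th order
# empirical conditional entropy of the candidate reconstruction plus a
# Lagrangian slope times Hamming distortion. The paper's actual cost is
# linear in the (k+1)-gram count matrix, which is what enables the
# Viterbi-like minimization over reconstruction sequences.

from collections import Counter
from math import log2

def empirical_cond_entropy(seq, k):
    grams = Counter(seq[i:i + k + 1] for i in range(len(seq) - k))
    ctxs = Counter(seq[i:i + k] for i in range(len(seq) - k))
    n = len(seq) - k
    return -sum(c / n * log2(c / ctxs[g[:k]]) for g, c in grams.items())

def cost(x, xhat, k, slope):
    distortion = sum(a != b for a, b in zip(x, xhat)) / len(x)
    return empirical_cond_entropy(xhat, k) + slope * distortion
```

A constant reconstruction has zero conditional entropy, so at a high enough slope the minimizer trades compressibility against distortion, which is the trade-off the encoder searches over.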