Latest Publications from the 2009 Data Compression Conference

Affine Modeling for the Complexity of Vector Quantizers
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.55
Estevan P. Seraco, J. Gomes
{"title":"Affine Modeling for the Complexity of Vector Quantizers","authors":"Estevan P. Seraco, J. Gomes","doi":"10.1109/DCC.2009.55","DOIUrl":"https://doi.org/10.1109/DCC.2009.55","url":null,"abstract":"We use a scalar function Θ to describe the complexity of data compression systems based on vector quantizers (VQs). This function is associated with the analog hardware implementation of a VQ, as done for example in focal-plane image compression systems. The rate and distortion of a VQ are represented by a Lagrangian cost function J. In this work we propose an affine model for the relationship between J and Θ, based on several VQ encoders performing the map R^M → {1, 2, . . . ,K}. A discrete source is obtained by partitioning images into 4×4 pixel blocks and extracting M = 4 principal components from each block. To design entropy-constrained VQs (ECVQs), we use the Generalized Lloyd Algorithm. To design simple interpolative VQs (IVQs), we consider only the simplest encoder: a linear transformation, followed by a layer of M scalar quantizers in parallel – the K cells of RM are defined by a set of thresholds {t1, . . . , tT}. The T thresholds are obtained from a non-linear unconstrained optimization method based on the Nelder-Mead algorithm.The fundamental unit of complexity Θ is \"transistor\": we only count the transistors that are used to implement the signal processing part of a VQ analog circuit: inner products, squares, summations, winner-takes-all, and comparators. The complexity functions for ECVQs and IVQs are as follows: ΘECVQ = 2KM + 9K + 3M + 4 and ΘIVQ = 4Mw1+2Mw2+3Mb1+Mb2+4M+3T, where Mw1 and Mw2 are the numbers of multiplications by positive and by negative weights. The numbers of positive and negative bias values are Mb1 and Mb2. Since ΘECVQ and ΘIVQ are scalar functions gathering the complexities of several different operations under the same unit, they are useful for the development of models relating rate-distortion cost to complexity.Using a training set, we designed several ECVQs and plotted all (J, Θ) points on a plane with axes log10(Θ) and log10(J) (J values from a test set). An affine model log10(Θ) = a1 log10(J) + a2 became apparent; a straightforward application of least squares yields the slope and offset coefficients. This procedure was repeated for IVQs. The error between the model and the data has a variance equal to 0.005 for ECVQs and 0.02 for IVQs. To validate the ECVQ and IVQ complexity models, we repeated the design and test procedure using new training and test sets. Then, we used the previously computed complexity models to predict the Θ of the VQs designed independently: the error between the model and the data has a variance equal to 0.01 for ECVQs and 0.02 for IVQs. This shows we are able to predict the rate-distortion performance of independently designed ECVQs and IVQs. 
This result serves as a starting point for studies on complexity gradients between J and Θ, and as a guideline for introducing complexity constraints in the traditional entropy-constrained Lagrangian cost.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116216051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
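To make the abstract's fitting step concrete, here is a minimal sketch: the Θ_ECVQ formula is taken from the abstract, but the (J, Θ) pairs are synthetic placeholders, not the paper's measurements.

```python
# Compute the transistor-count complexity of an ECVQ from codebook size K and
# dimension M, then fit the affine model log10(Theta) = a1*log10(J) + a2.
import numpy as np

def theta_ecvq(K: int, M: int) -> int:
    """Transistor count of an ECVQ analog encoder (formula from the abstract)."""
    return 2 * K * M + 9 * K + 3 * M + 4

M = 4                                   # principal components per 4x4 block
Ks = np.array([8, 16, 32, 64, 128])     # hypothetical codebook sizes
thetas = np.array([theta_ecvq(K, M) for K in Ks])

# Placeholder Lagrangian costs J; in the paper these come from rate-distortion
# measurements on a test set. We only assume J shrinks as K grows.
Js = 1.0 / Ks**0.8

# Least-squares fit of the affine model in log-log coordinates.
a1, a2 = np.polyfit(np.log10(Js), np.log10(thetas), deg=1)
resid_var = np.var(np.log10(thetas) - (a1 * np.log10(Js) + a2))
print(f"slope a1 = {a1:.3f}, offset a2 = {a2:.3f}, residual variance = {resid_var:.4f}")
```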
Invertible Integer Lie Group Transforms
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.38
Yusong Yan, Hongmei Zhu
{"title":"Invertible Integer Lie Group Transforms","authors":"Yusong Yan, Hongmei Zhu","doi":"10.1109/DCC.2009.38","DOIUrl":"https://doi.org/10.1109/DCC.2009.38","url":null,"abstract":"Invertible integer transforms are essential for lossless source encoding. Using lifting schemes, we develop a new family of invertible integer transforms based on discrete generalized cosine transforms. The discrete generalized cosine transforms that arise in connection with compact semi-simple Lie groups of rank 2, are orthogonal over a fundamental region and have recently attracted more attention in digital image processing. Since these integer transforms are invertible, they have potential applications in lossless image compression and encryption.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130665383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
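The paper's transforms are lifting factorizations of discrete generalized cosine transforms; as a toy illustration of why lifting gives exact integer invertibility, here is the classic integer Haar pair, which is not one of the paper's transforms.

```python
# Each lifting step adds a rounded function of one channel to the other, so the
# inverse subtracts the identical rounded value: the round-trip is exact.
def haar_forward(a: int, b: int) -> tuple[int, int]:
    d = b - a              # lifting step 1: integer difference
    s = a + (d >> 1)       # lifting step 2: average with floor rounding
    return s, d

def haar_inverse(s: int, d: int) -> tuple[int, int]:
    a = s - (d >> 1)       # undo step 2: the same rounded value is subtracted
    b = a + d              # undo step 1
    return a, b

for a, b in [(7, 3), (-5, 12), (100, 99)]:
    s, d = haar_forward(a, b)
    assert haar_inverse(s, d) == (a, b)   # lossless round trip on integers
print("integer lifting round-trips exactly")
```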
Source Coding Scheme for Multiple Sequence Alignments
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.64
P. Hanus, J. Dingel, Georg Chalkidis, J. Hagenauer
{"title":"Source Coding Scheme for Multiple Sequence Alignments","authors":"P. Hanus, J. Dingel, Georg Chalkidis, J. Hagenauer","doi":"10.1109/DCC.2009.64","DOIUrl":"https://doi.org/10.1109/DCC.2009.64","url":null,"abstract":"Rapid development of DNA sequencing technologies exponentially increases the amount of publicly available genomic data. Whole genome multiple sequence alignments represent a particularly voluminous, frequently downloaded static dataset. In this work we propose an asymmetric source coding scheme for such alignments using evolutionary prediction in combination with lossless black and white image compression. Compared to the Lempel-Ziv algorithm used so far the compression rates are almost halved.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134511456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
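A schematic of the prediction-based idea: code each aligned sequence as its deviations from a shared predictor, then compress the sparse residual. A simple column consensus stands in for the paper's evolutionary prediction model, and the toy alignment is made up; this is an illustration of the pipeline, not the authors' codec.

```python
import zlib
from collections import Counter

alignment = [
    "ACGT-ACGTACGA",
    "ACGTTACGTACGT",
    "ACGA-ACGTACGT",
]

# Column-wise consensus serves as the predictor shared by encoder and decoder.
consensus = "".join(Counter(col).most_common(1)[0][0] for col in zip(*alignment))

# Residual: '0' where the prediction is right, else the true symbol
# (unambiguous here since '0' is not a DNA/gap symbol).
residuals = ["".join('0' if s == p else s for s, p in zip(seq, consensus))
             for seq in alignment]

raw = "".join(alignment).encode()
res = "".join(residuals).encode()
# On real whole-genome alignments the residual stream is far more compressible.
print(len(zlib.compress(raw)), "bytes raw vs", len(zlib.compress(res)), "bytes predicted")
```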
Decentralized Estimation Using Learning Vector Quantization
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.77
Mihajlo Grbovic, S. Vucetic
{"title":"Decentralized Estimation Using Learning Vector Quantization","authors":"Mihajlo Grbovic, S. Vucetic","doi":"10.1109/DCC.2009.77","DOIUrl":"https://doi.org/10.1109/DCC.2009.77","url":null,"abstract":"Decentralized estimation is an essential problem for a number of data fusion applications. In this paper we propose a variation of the Learning Vector Quantization (LVQ) algorithm, the Distortion Sensitive LVQ (DSLVQ), to be used for quantizer design in decentralized estimation. Experimental results suggest that DSLVQ results in high-quality quantizers and that it allows easy adjustment of the complexity of the resulting quantizers to computational constraints of decentralized sensors. In addition, DSLVQ approach shows significant improvements over the popular LVQ2 algorithm as well as the previously proposed Regression Tree approach for decentralized estimation.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134518521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
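To show the mechanics DSLVQ builds on, here is a generic LVQ-style competitive prototype update for quantizer design on synthetic data; the specific distortion-sensitive rule of DSLVQ is defined in the paper and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(2000, 2))          # stand-in sensor measurements
K = 8                                      # quantizer size = sensor complexity knob
protos = data[rng.choice(len(data), K, replace=False)].copy()

lr = 0.05
for epoch in range(20):
    for x in data:
        i = np.argmin(np.sum((protos - x) ** 2, axis=1))  # winning prototype
        protos[i] += lr * (x - protos[i])                  # pull winner toward x
    lr *= 0.9                                              # decay learning rate

# Mean squared distortion of the trained quantizer over the data.
mse = np.mean(np.min(((data[:, None, :] - protos[None]) ** 2).sum(-1), axis=1))
print(f"quantizer MSE after training: {mse:.4f}")
```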
An Adaptive Sub-sampling Method for In-memory Compression of Scientific Data
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.65
D. Unat, T. Hromadka, S. Baden
{"title":"An Adaptive Sub-sampling Method for In-memory Compression of Scientific Data","authors":"D. Unat, T. Hromadka, S. Baden","doi":"10.1109/DCC.2009.65","DOIUrl":"https://doi.org/10.1109/DCC.2009.65","url":null,"abstract":"A  current challenge in scientific computing is how to curb the growth of simulation datasets without  losing valuable information. While  wavelet based methods are popular, they require that data be decompressed before it can analyzed,for example, when identifying time-dependent structures in turbulent flows. We present Adaptive Coarsening, an adaptive subsampling compression strategy that enables the compressed data product to be directly manipulated in memory without requiring costly decompression.We demonstrate compression factors of up to 8 in turbulent flow simulations in three dimensions.Our compression strategy produces a non-progressive multiresolution representation, subdividing the dataset into fixed sized regions and compressing each region independently.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132346101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
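A minimal 1-D sketch of the region-wise idea: keep a 2x-coarsened copy of a fixed-size region only when interpolation reconstructs it within a tolerance, otherwise store it raw. Block size, tolerance, and the 1-D setting are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def compress_block(block: np.ndarray, tol: float):
    coarse = block[::2]                               # drop every other sample
    fine_x = np.arange(len(block))
    recon = np.interp(fine_x, fine_x[::2], coarse)    # linear reconstruction
    if np.max(np.abs(recon - block)) <= tol:
        return ("coarse", coarse)                     # ~2x smaller, usable as-is
    return ("raw", block)

signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.001 * np.random.randn(1024)
blocks = [compress_block(signal[i:i + 64], tol=0.01) for i in range(0, 1024, 64)]
kept = sum(len(payload) for _, payload in blocks)
print(f"compression factor: {len(signal) / kept:.2f}")
```

Because each region is either raw samples or coarser samples, analysis code can index the compressed product directly, which is the property the paper emphasizes.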
l1 Compression of Image Sequences Using the Structural Similarity Index Measure
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.28
J. Dahl, Jan Østergaard, T. L. Jensen, S. H. Jensen
{"title":"l1 Compression of Image Sequences Using the Structural Similarity Index Measure","authors":"J. Dahl, Jan Østergaard, T. L. Jensen, S. H. Jensen","doi":"10.1109/DCC.2009.28","DOIUrl":"https://doi.org/10.1109/DCC.2009.28","url":null,"abstract":"We consider lossy compression of image sequences using l1-compression with overcomplete dictionaries. As a fidelity measure for the reconstruction quality, we incorporate the recently proposed structural similarity index measure, and we show that this leads to problem formulations that are very similar to conventional l1 compression algorithms. In addition, we develop efficient large-scale algorithms used for joint encoding of multiple image frames.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"77 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128812644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
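A minimal ISTA sketch of conventional l1 coding with an overcomplete dictionary, minimizing 0.5·||Dz − x||² + λ·||z||₁; the paper replaces the plain l2 fidelity term with an SSIM-based one, which this sketch keeps as l2 for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 256                       # signal dim, dictionary atoms (overcomplete)
D = rng.normal(size=(n, m)) / np.sqrt(n)
x = D @ (rng.normal(size=m) * (rng.random(m) < 0.05))   # sparse ground truth

lam = 0.01
step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1/L with L the Lipschitz constant
z = np.zeros(m)
for _ in range(500):
    grad = D.T @ (D @ z - x)                                   # gradient of fidelity
    z = z - step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print(f"nonzeros: {np.count_nonzero(z)}, residual: {np.linalg.norm(D @ z - x):.4f}")
```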
Entropy Coding via Parametric Source Model with Applications in Fast and Efficient Compression of Image and Video Data
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.80
K. Minoo, Truong Q. Nguyen
{"title":"Entropy Coding via Parametric Source Model with Applications in Fast and Efficient Compression of Image and Video Data","authors":"K. Minoo, Truong Q. Nguyen","doi":"10.1109/DCC.2009.80","DOIUrl":"https://doi.org/10.1109/DCC.2009.80","url":null,"abstract":"In this paper a framework is proposed for efficient entropy coding of data which can be represented by a parametric distribution model. Based on the proposed framework, an entropy coder achieves coding efficiency by estimating the parameters of the statistical model (for the coded data), either via Maximum A Posteriori (MAP) or Maximum Likelihood (ML) parameter estimation techniques.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126027977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
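A sketch of the ML branch of this idea: fit a Laplacian to the data by maximum likelihood (its ML scale estimate is the mean absolute deviation), quantize, and measure the ideal code length the fitted model assigns. The Laplacian choice and the unit-step quantizer are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
coeffs = rng.laplace(loc=0.0, scale=2.0, size=10000)   # stand-in residual data

b = np.mean(np.abs(coeffs))                            # ML estimate of Laplacian scale
q = np.round(coeffs).astype(int)                       # unit-step quantizer

def laplace_bin_prob(k: int, b: float) -> float:
    """Model probability that a sample falls in bin [k-0.5, k+0.5)."""
    cdf = lambda t: 0.5 * np.exp(t / b) if t < 0 else 1 - 0.5 * np.exp(-t / b)
    return cdf(k + 0.5) - cdf(k - 0.5)

# Ideal rate: -log2 of the model probability, achievable by an arithmetic coder.
bits = sum(-np.log2(laplace_bin_prob(k, b)) for k in q)
print(f"model scale b = {b:.3f}, ideal rate = {bits / len(q):.3f} bits/sample")
```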
High Performance Word-Codeword Mapping Algorithm on PPM
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.40
J. Adiego, Miguel A. Martínez-Prieto, P. Fuente
{"title":"High Performance Word-Codeword Mapping Algorithm on PPM","authors":"J. Adiego, Miguel A. Martínez-Prieto, P. Fuente","doi":"10.1109/DCC.2009.40","DOIUrl":"https://doi.org/10.1109/DCC.2009.40","url":null,"abstract":"The word-codeword mapping technique allows words to be managed in PPM modelling when a natural language text file is being compressed. The main idea for managing words is to assign them codes in order to improve the compression. The previous work was focused on proposing several mapping adaptive algorithms and evaluating them. In this paper, we propose a semi-static word-codeword mapping method that takes advantage of by previous knowledge of some statistical data of the vocabulary. We test our idea implementing a basic prototype, dubbed mppm2, which also retains all the desirable features of a word-codeword mapping technique. The comparison with other techniques and compressors shows that our proposal is a very competitive choice for compressing natural language texts. In fact, empirical results show that our prototype achieves a very good compression for this type of documents.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125349817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
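A sketch of a semi-static word-codeword mapping: scan the text once to rank words by frequency, then assign shorter byte codewords to more frequent words before handing the code stream to a PPM-style model. The variable-byte code here is an illustrative stand-in for mppm2's actual mapping.

```python
from collections import Counter

def build_mapping(text: str) -> dict[str, bytes]:
    ranked = [w for w, _ in Counter(text.split()).most_common()]
    mapping = {}
    for rank, word in enumerate(ranked):
        if rank < 128:                       # frequent words get 1-byte codes
            mapping[word] = bytes([rank])
        else:                                # the long tail gets 2-byte codes
            hi, lo = divmod(rank - 128, 128)
            mapping[word] = bytes([128 + hi, lo])
    return mapping

text = "the cat sat on the mat and the dog sat on the log"
mapping = build_mapping(text)
encoded = b"".join(mapping[w] for w in text.split())
print(f"{len(text)} chars -> {len(encoded)} code bytes before PPM coding")
```

Because the mapping is fixed after one pass, the decoder only needs the (small) vocabulary table, which is the semi-static trade-off the abstract exploits.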
Practical Parallel Algorithms for Dictionary Data Compression
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.84
L. Cinque, S. Agostino, L. Lombardi
{"title":"Practical Parallel Algorithms for Dictionary Data Compression","authors":"L. Cinque, S. Agostino, L. Lombardi","doi":"10.1109/DCC.2009.84","DOIUrl":"https://doi.org/10.1109/DCC.2009.84","url":null,"abstract":"PRAM CREW parallel algorithms requiring logarithmic time and a linear number of processors exist for sliding (LZ1) and static dictionary compression. On the other hand, LZ2 compression seems hard to parallelize. Both adaptive methods work with prefix dictionaries, that is, all prefixes of a dictionary element are dictionary elements.Therefore, it is reasonable to use prefix dictionaries also for the static method. A left to right semi-greedy approach exists to compute an optimal parsing of a string with a prefix static dictionary. The left to right greedy approach is enough to achieve optimal compression with a sliding dictionary since such dictionary is both prefix and suffix. We assume the window is bounded by a constant. With the practical assumption that the dictionary elements have constant length we present PRAM EREW algorithms for sliding and static dictionary compression still requiring logarithmic time and a linear number of processors. A PRAM EREW decoder for static dictionary compression can be easily designed with a linear number of processors and logarithmic time. A work-optimal logarithmic time PRAM EREW decoder exists for sliding dictionary compression when the window has constant length. The simplest model for parallel computation is an array of processors with distibuted memory and no interconnections, therefore, no communication cost. An approximation scheme to optimal compression with prefix static dictionaries was designed running with the same complexity of the previous algorithms on such model. It was presented for a massively parallel architecture but in virtue of its scalability it can be implemented on a small scale system as well.We describe such approach and extend it to the sliding dictionary method. The approximation scheme for sliding dictionaries is suitable for small scale systems but due to its adaptiveness it is practical for a large scale system when the file size is large. A two-dimensional extension of the sliding dictionary method to lossless compression of bi-level images, called BLOCK MATCHING, is also discussed. We designed a parallel implementation of such heuristic on a constant size array of processors and experimented it with up to 32 processors of a 256 Intel Xeon 3.06 GHz  processors machine (avogadro.cilea.it) on a test set of large topographic images. We achieved the expected speed-up, obtaining parallel compression and decompression about twenty-five times faster than the sequential ones.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121887791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
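A sequential sketch of the left-to-right semi-greedy parsing the abstract refers to: at each position, pick the phrase that maximizes its own length plus the longest match starting right after it (one step of lookahead), breaking ties toward the longer phrase. The tiny prefix-closed dictionary is illustrative, not from the paper.

```python
def longest_match(s: str, i: int, dictionary: set[str]) -> int:
    if i >= len(s):
        return 0
    n = 1                                  # single characters are always codeable
    for j in range(i + 2, len(s) + 1):
        if s[i:j] in dictionary:
            n = j - i
    return n

def semi_greedy_parse(s: str, dictionary: set[str]) -> list[str]:
    phrases, i = [], 0
    while i < len(s):
        best_l, best_total = 1, -1
        # Prefix closure guarantees every l up to the longest match is codeable.
        for l in range(1, longest_match(s, i, dictionary) + 1):
            total = l + longest_match(s, i + l, dictionary)
            if total >= best_total:        # ties go to the longer phrase
                best_l, best_total = l, total
        phrases.append(s[i:i + best_l])
        i += best_l
    return phrases

# Prefix-closed dictionary: every prefix of an element is also an element.
d = {"a", "aa", "ab", "abb", "abbb", "b", "bb"}
print(semi_greedy_parse("aabbb", d))   # ['a', 'abbb']; greedy would emit 3 phrases
```

On "aabbb", plain greedy takes "aa" first and is then forced into "bb" + "b"; the one-step lookahead sees that starting with "a" exposes the longer phrase "abbb".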
H.264/MPEG-4 AVC Encoder Parameter Selection Algorithms for Complexity Distortion Tradeoff
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.53
R. Vanam, E. Riskin, R. Ladner
{"title":"H.264/MPEG-4 AVC Encoder Parameter Selection Algorithms for Complexity Distortion Tradeoff","authors":"R. Vanam, E. Riskin, R. Ladner","doi":"10.1109/DCC.2009.53","DOIUrl":"https://doi.org/10.1109/DCC.2009.53","url":null,"abstract":"The H.264 encoder has input parameters that determine the bit rate and distortion of the compressed video and the encoding complexity. A set of encoder parameters is referred to as a parameter setting. We previously proposed two offline algorithms for choosing H.264 encoder parameter settings that have distortion-complexity performance close to the parameter settings obtained from an exhaustive search, but take significantly fewer encodings. However they generate only a few parameter settings. If there is no available parameter settings for a given encode time, the encoder will need to use a lower complexity parameter setting resulting in a decrease in peak-signal-to-noise-ratio (PSNR). In this paper, we propose two algorithms for finding additional parameter settings over our previous algorithm and show that they improve the PSNR by up to 0.71 dB and 0.43 dB, respectively. We test both our algorithms on Linux and PocketPC platforms.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121979853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
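A sketch of the underlying selection problem: given measured (encode time, PSNR) pairs for candidate parameter settings, keep only the Pareto-optimal ones, then pick the best setting that fits an encode-time budget. The setting names and numbers are invented for illustration; the paper's contribution is how to find such settings with few encodings.

```python
candidates = {
    "fast":      (1.0, 33.2),   # (relative encode time, PSNR in dB)
    "medium":    (2.5, 35.0),
    "slow":      (6.0, 36.1),
    "odd-combo": (3.0, 34.1),   # dominated by "medium": slower and worse
}

def pareto(settings: dict[str, tuple[float, float]]) -> dict:
    """Keep settings not dominated by a faster-or-equal, better-or-equal one."""
    keep = {}
    for name, (t, psnr) in settings.items():
        dominated = any(t2 <= t and p2 >= psnr and (t2, p2) != (t, psnr)
                        for t2, p2 in settings.values())
        if not dominated:
            keep[name] = (t, psnr)
    return keep

frontier = pareto(candidates)
budget = 3.5                     # encode-time budget for this device
best = max((s for s in frontier.items() if s[1][0] <= budget),
           key=lambda s: s[1][1])
print(f"frontier: {sorted(frontier)}; pick within budget {budget}: {best[0]}")
```

The PSNR loss the abstract mentions is exactly the gap between the budget and the next frontier point below it, which is why densifying the frontier with additional settings helps.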