2010 Data Compression Conference: Latest Publications

An Efficient Algorithm for Almost Instantaneous VF Code Using Multiplexed Parse Tree
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.27
S. Yoshida, T. Kida
Abstract: Almost Instantaneous VF code, proposed by Yamamoto and Yokoo in 2001, is a variable-length-to-fixed-length code that uses a set of parse trees and achieves a good compression ratio. However, it needs much more time and space for both encoding and decoding than an ordinary VF code does. In this paper, we prove that the set of parse trees can be multiplexed into a compact single tree that simulates the original encoding and decoding procedures. Our technique reduces the total number of nodes from O(2^l k) to O(2^l k - k^2), where l and k are the codeword length and the alphabet size, respectively. Experimental results show that, using this technique, natural-language texts can be encoded and decoded more than three times faster.
Citations: 12
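
For readers unfamiliar with VF coding, the sketch below builds an ordinary Tunstall-style parse tree and encodes a string with fixed-length codewords. It is a minimal illustration of the parse-tree machinery that the paper multiplexes, not the AIVF algorithm itself; the source probabilities and the `num_leaves` budget are made-up values, and the greedy parser assumes the input parses completely (no flush handling).

```python
# Minimal Tunstall-style VF coding sketch (not the AIVF multiplexed-tree method):
# grow a parse tree whose leaves are dictionary phrases, then map each parsed
# phrase to a fixed-length codeword.
from math import ceil, log2

def build_tunstall(probs, num_leaves):
    """Repeatedly expand the most probable leaf until the leaf budget is reached."""
    leaves = [(p, s) for s, p in probs.items()]          # (probability, phrase)
    while len(leaves) + len(probs) - 1 <= num_leaves:
        leaves.sort(key=lambda x: -x[0])
        p, s = leaves.pop(0)                              # expand this leaf
        leaves += [(p * q, s + t) for t, q in probs.items()]
    return sorted(s for _, s in leaves)

def encode(text, leaves):
    """Greedy parse: one fixed-length codeword per dictionary phrase."""
    width = ceil(log2(len(leaves)))
    out, i = [], 0
    while i < len(text):
        match = max((s for s in leaves if text.startswith(s, i)), key=len)
        out.append(format(leaves.index(match), f"0{width}b"))
        i += len(match)
    return "".join(out)

probs = {"a": 0.7, "b": 0.2, "c": 0.1}                    # illustrative source model
leaves = build_tunstall(probs, 8)
print(leaves, encode("aaabacaab", leaves))
```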
Data Compression Based on a Dictionary Method Using Recursive Construction of T-Codes
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.68
K. Hamano, Hirosuke Yamamoto
Abstract: We propose a new data compression scheme based on T-codes [3], using a dictionary method in which all phrases added to the dictionary have a recursive structure similar to T-codes. Our scheme compresses the Calgary Corpus more efficiently than known T-code-based schemes [2] and UNIX compress, a variant of LZ78.
Citations: 6
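
As background on the recursion the title refers to, the snippet below performs simple T-augmentation (T-expansion parameter 1) as it is usually defined: remove the chosen prefix p from the code set and add p prepended to every codeword. This is a generic illustration of the T-code construction, not the paper's dictionary scheme; the starting alphabet and the prefix choices are arbitrary.

```python
# One round of simple T-augmentation: S' = (S \ {p}) | { p + s : s in S }.
# Iterating this from the source alphabet yields the recursive T-code structure.
def t_augment(S, p):
    assert p in S
    return (S - {p}) | {p + s for s in S}

S = {"0", "1"}
S = t_augment(S, "0")    # {'1', '00', '01'}
S = t_augment(S, "01")   # {'1', '00', '0100', '0101', '011'}
print(sorted(S, key=len))
```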
A New Searchable Variable-to-Variable Compressor
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.25
N. Brisaboa, A. Fariña, Juan R. Lopez, G. Navarro, Eduardo Rodríguez López
Abstract: Word-based compression of natural-language text has proven to be a good way to trade off compression ratio and speed, obtaining compression ratios close to 30% and very fast decompression. Additionally, it permits fast searches over the compressed text using Boyer-Moore-type algorithms. Such compressors process fixed source symbols (words) and assign them variable-byte-length codewords, thus following a fixed-to-variable approach. We present a new variable-to-variable compressor (v2vdc) that uses words and phrases as the source symbols, which are encoded with a variable-length scheme. The phrases are chosen using the longest-common-prefix information of the suffix array of the text, so as to favor long and frequent phrases. We obtain compression ratios close to those of p7zip and ppmdi, outperforming bzip2, and 8-10 percentage points lower than the equivalent word-based compressor. In addition, v2vdc is among the fastest to decompress, and it allows efficient direct search of the compressed text, in some cases the fastest to date.
Citations: 14
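
The phrase selection relies on longest-common-prefix (LCP) information over the text's suffix array: adjacent suffixes in the sorted order share long prefixes exactly where long repeated phrases occur. The sketch below computes a (naive) suffix array and Kasai's LCP array and reports the longest repeated phrase; it illustrates the information v2vdc draws on, not its actual phrase-selection heuristic, and the sample text is made up.

```python
# Suffix array + Kasai LCP sketch: long LCP values between adjacent suffixes
# expose long, repeated phrases that are good candidates for the dictionary.
def suffix_array(t):
    # Naive O(n^2 log n) construction, for illustration only.
    return sorted(range(len(t)), key=lambda i: t[i:])

def lcp_array(t, sa):
    """Kasai's algorithm: lcp[i] = common-prefix length of suffixes sa[i-1] and sa[i]."""
    n = len(t)
    rank = [0] * n
    for i, s in enumerate(sa):
        rank[s] = i
    lcp, h = [0] * n, 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and t[i + h] == t[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1
        else:
            h = 0
    return lcp

text = "to be or not to be, that is the question "
sa = suffix_array(text)
lcp = lcp_array(text, sa)
k = max(range(len(text)), key=lambda i: lcp[i])
print(repr(text[sa[k]:sa[k] + lcp[k]]))   # longest repeated phrase: 'to be'
```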
An MCMC Approach to Lossy Compression of Continuous Sources
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.11
D. Baron, T. Weissman
Abstract: Motivated by the Markov chain Monte Carlo (MCMC) relaxation method of Jalali and Weissman, we propose a lossy compression algorithm for continuous-amplitude sources that relies on a finite reproduction alphabet that grows with the input length. Our algorithm asymptotically achieves the optimum rate-distortion (RD) function universally for stationary ergodic continuous-amplitude sources. However, the large alphabet slows down convergence to the RD function and is thus an impediment in practice. We therefore propose an MCMC-based algorithm that uses a smaller, adaptive reproduction alphabet. In addition to computational advantages, the reduced alphabet accelerates convergence to the RD function and is thus more suitable in practice.
Citations: 10
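
A toy illustration of the MCMC relaxation idea: Metropolis moves over reproduction sequences drawn from a finite alphabet, with an energy that trades empirical entropy against distortion. The fixed uniform grid, the zeroth-order entropy term, and all constants below are simplifying assumptions; the paper's point is precisely that for continuous sources the reproduction alphabet can be kept small and adapted rather than fixed in advance.

```python
# Toy MCMC sketch in the spirit of the Jalali-Weissman relaxation:
# energy(xhat) = empirical entropy of xhat + beta * MSE(x, xhat),
# sampled with single-symbol Metropolis moves over a finite grid.
import math, random
from collections import Counter

def empirical_entropy(seq):
    n, counts = len(seq), Counter(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def energy(x, xhat, beta):
    mse = sum((a - b) ** 2 for a, b in zip(x, xhat)) / len(x)
    return empirical_entropy(xhat) + beta * mse

def mcmc_compress(x, alphabet, beta=8.0, steps=2000, temp=1.0):
    xhat = [min(alphabet, key=lambda a: abs(a - v)) for v in x]   # start at nearest grid point
    for _ in range(steps):
        cand = xhat[:]
        cand[random.randrange(len(x))] = random.choice(alphabet)
        d_energy = energy(x, cand, beta) - energy(x, xhat, beta)
        if d_energy < 0 or random.random() < math.exp(-d_energy / temp):
            xhat = cand
    return xhat

random.seed(1)
x = [random.gauss(0, 1) for _ in range(64)]
grid = [-2.0, -1.0, 0.0, 1.0, 2.0]        # assumed (non-adaptive) reproduction alphabet
xhat = mcmc_compress(x, grid)
print(empirical_entropy(xhat), sum((a - b) ** 2 for a, b in zip(x, xhat)) / len(x))
```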
Estimation-Theoretic Delayed Decoding of Predictively Encoded Video Sequences
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.18
Jingning Han, Vinay Melkote, K. Rose
Abstract: Current video coding schemes employ motion compensation to exploit the fact that the signal forms an auto-regressive process along the motion trajectory, and they remove temporal redundancy by predicting from prior reconstructed samples. However, the decoder may, in principle, also exploit correlations with the received encoding information of future frames. In contrast to current decoders, which reconstruct every block immediately once the corresponding quantization indices are available, we propose an estimation-theoretic delayed decoding scheme that leverages the quantization and motion information of one or more future frames to refine the reconstruction of the current block. The scheme, implemented in the transform domain, efficiently combines all available (including future) information in an appropriately derived conditional pdf to obtain the optimal delayed reconstruction of each transform coefficient in the frame. Experiments demonstrate substantial gains over the standard H.264 decoder. The scheme learns the autoregressive model from information available to the decoder, and compatibility with the standard syntax and existing encoders is retained.
Citations: 14
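
To make the estimation-theoretic idea concrete for a single transform coefficient: each received quantization index confines the coefficient to an interval, information from a future frame (assumed here to supply a second, narrower interval) shrinks that set, and the MMSE reconstruction is the conditional mean of the prior over what remains. The Laplacian prior, its parameter, the interval endpoints, and the crude Riemann-sum integration below are all illustrative assumptions, not pieces of the H.264-integrated scheme.

```python
# MMSE reconstruction of one transform coefficient as a conditional mean
# under a Laplacian prior, restricted first to the current-frame quantization
# interval and then to a narrower interval implied by future-frame information.
import math

def laplacian_pdf(x, b=4.0):
    return math.exp(-abs(x) / b) / (2 * b)

def conditional_mean(lo, hi, steps=10000):
    """E[X | lo <= X <= hi] under the Laplacian prior, by midpoint Riemann sum."""
    dx = (hi - lo) / steps
    xs = [lo + (k + 0.5) * dx for k in range(steps)]
    w = [laplacian_pdf(x) for x in xs]
    return sum(x * p for x, p in zip(xs, w)) / sum(w)

# Current frame: the quantization index says the coefficient lies in [4, 12).
print(conditional_mean(4, 12))   # immediate reconstruction: pulled below the midpoint 8
# Future-frame information (assumed) narrows the interval to [4, 7).
print(conditional_mean(4, 7))    # refined, delayed reconstruction
```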
Segment-Parallel Predictor for FPGA-Based Hardware Compressor and Decompressor of Floating-Point Data Streams to Enhance Memory I/O Bandwidth
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.44
K. Sano, Kazuya Katahira, S. Yamamoto
Abstract: This paper presents segment-parallel prediction for high-throughput compression and decompression of floating-point data streams on an FPGA-based LBM accelerator. To enhance the effective memory I/O bandwidth of the accelerator, we focus on prediction-based compression of floating-point data streams. Although a hardware implementation is essential for high-throughput compression, the feedback loop in the decompressor is a bottleneck due to the sequential predictions necessary for bit reconstruction. We introduce a segment-parallel approach to the 1D polynomial predictor to achieve the required decompression throughput. We evaluate the compression ratio of segment-parallel cubic prediction combined with various encoders of the prediction difference.
Citations: 14
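
The sketch below shows the flavor of prediction-based floating-point compression with a degree-3 (cubic) extrapolating predictor and XOR residuals of the IEEE-754 bit patterns, with the stream split into segments that are handled independently so decoding need not be strictly sequential. It is a software illustration of the segment-parallel idea under assumed segment boundaries and test data, not the FPGA design, and the residuals are left unencoded.

```python
# Cubic-extrapolation predictor + XOR residuals on IEEE-754 float32 bit patterns;
# independent segments remove the sequential prediction dependency at decode time.
import struct

def cubic_predict(prev4):
    """Degree-3 extrapolation: x[n] ~ 4x[n-1] - 6x[n-2] + 4x[n-3] - x[n-4]."""
    x4, x3, x2, x1 = prev4
    return 4 * x1 - 6 * x2 + 4 * x3 - x4

def float_bits(x):
    return struct.unpack("<I", struct.pack("<f", x))[0]

def compress_segment(seg):
    """First four samples verbatim; then residual = bits(actual) XOR bits(prediction)."""
    head = list(seg[:4])
    residuals = []
    for i in range(4, len(seg)):
        pred = cubic_predict(seg[i - 4:i])
        residuals.append(float_bits(seg[i]) ^ float_bits(pred))
    return head, residuals

# Two segments processed independently -> both can be decoded in parallel.
stream = [0.1 * i * i for i in range(16)]
for seg in (stream[:8], stream[8:]):
    head, res = compress_segment(seg)
    print([hex(r) for r in res])   # near-zero residuals compress well
```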
On the Systematic Measurement Matrix for Compressed Sensing in the Presence of Gross Errors
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.38
Zhi Li, Feng Wu, John Wright
Abstract: Inspired by syndrome source coding with linear error-correcting codes, we explore a new form of measurement matrix for compressed sensing. The proposed matrix is constructed in the systematic form [A I], where A is a randomly generated submatrix with i.i.d. Gaussian entries and I is the identity matrix. In the noiseless setting, this systematic construction retains properties similar to those of the conventional Gaussian ensemble. However, in the noisy setting with gross errors of arbitrary magnitude, where the Gaussian ensemble fails catastrophically, the systematic construction remains strongly stable. In this paper, we prove its stable reconstruction property. We further show its l1-norm sparsity recovery property by proving its restricted isometry property (RIP). We also demonstrate how the systematic matrix can be used to design a family of lossy-to-lossless compressed sensing schemes in which the number of measurements trades off against the reconstruction distortion.
Citations: 18
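
The construction itself is a one-liner: draw A with i.i.d. Gaussian entries and append an identity block. The sketch below builds Phi = [A I], plants a few sparse coefficients plus a couple of gross errors landing on the identity columns, and recovers them with a simple orthogonal matching pursuit; OMP is used only as a stand-in for the l1 decoding the paper analyzes, and the dimensions and sparsity levels are arbitrary.

```python
# Systematic measurement matrix Phi = [A | I] with A i.i.d. Gaussian,
# exercised with a greedy (OMP) recovery as a stand-in for l1 decoding.
import numpy as np

rng = np.random.default_rng(0)
m, k = 40, 80                               # m measurements, k "random" columns
A = rng.standard_normal((m, k)) / np.sqrt(m)
Phi = np.hstack([A, np.eye(m)])             # systematic form [A I]

x = np.zeros(k + m)
x[rng.choice(k, 3, replace=False)] = rng.standard_normal(3)           # sparse signal part
x[k + rng.choice(m, 2, replace=False)] = 10 * rng.standard_normal(2)  # gross errors
y = Phi @ x

def omp(Phi, y, sparsity):
    """Pick the column most correlated with the residual, re-fit by least squares."""
    r, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    xhat = np.zeros(Phi.shape[1])
    xhat[support] = coef
    return xhat

xhat = omp(Phi, y, 5)
print(np.linalg.norm(x - xhat))             # small when recovery succeeds
```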
Local Modeling for WebGraph Compression
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.59
V. Anh, Alistair Moffat
Abstract: We describe a simple hierarchical scheme for webgraph compression that supports efficient in-memory and from-disk decoding of page neighborhoods, for neighborhoods defined over both incoming and outgoing links. The scheme is highly competitive in terms of both compression effectiveness and decoding speed.
Citations: 11
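
As background on what webgraph compressors exploit, the sketch below gap-encodes a sorted neighbor list and packs the gaps with variable-byte codes, the usual baseline that hierarchical schemes such as this one improve upon. It is not the paper's method, and the example adjacency list is made up.

```python
# Baseline adjacency-list coding: sort neighbor ids, store gaps, pack each gap
# with a variable-byte code (7 payload bits per byte plus a continuation bit).
def vbyte_encode(n):
    out = bytearray()
    while True:
        out.append((n & 0x7F) | (0x80 if n > 0x7F else 0))
        if n <= 0x7F:
            return bytes(out)
        n >>= 7

def encode_adjacency(neighbors):
    neighbors = sorted(neighbors)
    gaps = [neighbors[0]] + [b - a for a, b in zip(neighbors, neighbors[1:])]
    return b"".join(vbyte_encode(g) for g in gaps)

links = [1002, 1003, 1004, 1050, 20000]     # outgoing links of one page
print(encode_adjacency(links).hex())        # small gaps -> few bytes
```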
Batch-Pipelining for H.264 Decoding on Multicore Systems
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.57
Tang-Hsun Tu, Chih-wen Hsueh
Abstract: Pipelining has been applied in many areas to improve performance by overlapping the execution of computing stages. However, it is difficult to apply to H.264/AVC decoding at the frame level, because the bitstreams are encoded with many dependencies and little parallelism is left to exploit; consequently, many approaches resort to hardware assistance. Fortunately, pure software pipelining can be applied to H.264/AVC decoding at the macroblock level with reasonable performance gain. However, the pipeline stages may need to synchronize with one another, incurring considerable extra overhead, and this overhead becomes relatively larger as the stages themselves execute faster with better hardware and software optimization. We group multiple stages into larger groups, as "batched" pipelining, to execute concurrently on multicore systems. Stages in different groups need not synchronize with each other, so the scheme incurs little overhead and is highly scalable. We therefore propose a novel, effective batch-pipeline (BP) approach for H.264/AVC decoding on multicore systems. Moreover, because of its flexibility, BP can be combined with other hardware approaches or software technologies to further improve performance. To optimize our approach, we analyze how to group the macroblocks and derive closed-form formulas to guide the grouping. We also conduct experiments on various bitstreams to verify our approach. The results show that it speeds up decoding by up to 93% and achieves up to 249 and 70 FPS for 720p and 1080p resolutions, respectively, on a 4-core machine over a published optimized H.264 decoder. We believe our batch-pipelining approach opens a new, effective direction for multimedia software codec development.
Citations: 4
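
A generic sketch of the batching idea: a four-stage macroblock pipeline is fused into two groups, so worker threads synchronize through a single queue instead of three. The stage bodies are placeholders and the grouping is hard-coded; the paper derives closed-form formulas for grouping real H.264 decoding stages, which this sketch does not attempt.

```python
# Two fused stage groups connected by one queue: stages inside a group run
# back-to-back in the same thread, so only the group boundary needs syncing.
import threading, queue

def entropy_decode(mb):    return mb + 1      # placeholder stage bodies
def inverse_transform(mb): return mb * 2
def predict(mb):           return mb - 1
def deblock(mb):           return mb

GROUP_A = [entropy_decode, inverse_transform]   # fused into one worker
GROUP_B = [predict, deblock]                    # fused into another worker

def run_group(stages, inq, outq):
    while True:
        mb = inq.get()
        if mb is None:                          # end-of-stream marker
            outq.put(None)
            return
        for stage in stages:
            mb = stage(mb)
        outq.put(mb)

q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
workers = [threading.Thread(target=run_group, args=(GROUP_A, q_in, q_mid)),
           threading.Thread(target=run_group, args=(GROUP_B, q_mid, q_out))]
for w in workers:
    w.start()
for mb in range(8):                             # feed eight "macroblocks"
    q_in.put(mb)
q_in.put(None)

decoded = []
while (mb := q_out.get()) is not None:
    decoded.append(mb)
for w in workers:
    w.join()
print(decoded)
```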
Development of Optimum Lossless Compression Systems for Space Missions
2010 Data Compression Conference Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.70
A. G. Villafranca, J. Portell, E. García-Berro
Abstract: The scientific instruments included in modern space missions require high compression ratios in order to downlink all the acquired data to the ground. In many cases this must be achieved without losses, and the available processing power is modest. Algorithms that require large amounts of data for optimum operation cannot be used because of the limited reliability of the communications channel. Existing methods for lossless data compression often have difficulty fulfilling such tight requirements. We present a method for developing lossless compression systems that achieve high compression ratios at low processing cost while guaranteeing a reliable downlink. This is done with a two-stage compressor: an adequate pre-processing stage followed by an entropy coder. The pre-processor should be tailored to each case and carefully evaluated. For the second stage, we analyze some existing solutions and present a new entropy coder that performs comparably to, or even better than, most coders and guarantees high ratios in the presence of outliers. Finally, we apply this method to the Gaia mission and present the results obtained.
Citations: 1
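
A minimal two-stage sketch in the spirit of the method: a delta pre-processor followed by a Rice/Golomb entropy coder, with a zigzag map from signed residuals to unsigned values. The Rice coder is a common stand-in rather than the paper's new coder; in fact the outlier in the sample data shows exactly the failure mode (a very long unary quotient) that the paper's coder is designed to keep bounded.

```python
# Two-stage lossless compression sketch: delta pre-processing, then Rice coding.
def zigzag(v):
    """Map signed to unsigned: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return -2 * v - 1 if v < 0 else 2 * v

def rice_encode(values, k):
    """Rice code with parameter k: quotient in unary, remainder in k bits."""
    bits = []
    for v in values:
        u = zigzag(v)
        q, r = u >> k, u & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

def preprocess(samples):
    """Delta pre-processor: keep the first sample, then successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

samples = [100, 101, 103, 102, 102, 250, 104, 105]   # 250 is an outlier
residuals = preprocess(samples)
print(residuals)
# The outlier's residual produces a very long unary part -- the behavior a
# robust coder must bound to guarantee the downlink budget.
print(rice_encode(residuals[1:], k=2))
```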