Latest Publications from the 2009 Data Compression Conference

A Comparative Study of Lossless Compression Algorithms on Multi-spectral Imager Data
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1117/12.821007
M. Grossberg, I. Gladkova, S. Gottipati, M. Rabinowitz, P. Alabi, T. George, António Pacheco
{"title":"A Comparative Study of Lossless Compression Algorithms on Multi-spectral Imager Data","authors":"M. Grossberg, I. Gladkova, S. Gottipati, M. Rabinowitz, P. Alabi, T. George, António Pacheco","doi":"10.1117/12.821007","DOIUrl":"https://doi.org/10.1117/12.821007","url":null,"abstract":"High resolution multi-spectral imagers are becoming increasingly important tools for studying and monitoring the earth. As much of the data from these multi-spectral imagers is used for quantitative analysis, the role of lossless compression is critical in the transmission, distribution, archiving, and management of the data. To evaluate the performance of various compression algorithms on multi-spectral images, we conducted statistical evaluation on datasets consisting of hundreds of granules from both geostationary and polar imagers. We broke these datasets up by different criteria such as hemisphere, season, and time-of-day in order to ensure the results are robust, reliable, and applicable for future imagers.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129097341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
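The paper benchmarks specialized lossless coders on real imager granules. As a hedged illustration of the evaluation workflow only, the sketch below runs general-purpose stdlib codecs (zlib, bz2, lzma) on synthetic multi-band data and reports mean compression ratios per grouping criterion; the granule generator and the hemisphere tags are invented for the example and are not from the paper.

```python
# Minimal sketch of a compression-ratio comparison harness (not the authors' code).
# General-purpose stdlib codecs stand in for the specialized lossless algorithms
# evaluated in the paper; the granules here are synthetic.
import bz2
import lzma
import zlib
import numpy as np

CODECS = {
    "zlib": lambda b: zlib.compress(b, 9),
    "bz2":  lambda b: bz2.compress(b, 9),
    "lzma": lambda b: lzma.compress(b),
}

def make_granule(rng, bands=5, rows=256, cols=256):
    """Synthetic stand-in for one multi-spectral granule (smooth trend + noise)."""
    base = rng.normal(0.0, 1.0, (rows, cols)).cumsum(axis=1)
    data = np.stack([base * (k + 1) + rng.normal(0, 0.5, base.shape)
                     for k in range(bands)])
    return np.round(data).astype(np.int16)

rng = np.random.default_rng(0)
# Tag each granule with a grouping criterion, e.g. hemisphere.
granules = [("north" if i % 2 == 0 else "south", make_granule(rng))
            for i in range(4)]

for name, codec in CODECS.items():
    for group in ("north", "south"):
        ratios = [g.nbytes / len(codec(g.tobytes()))
                  for tag, g in granules if tag == group]
        print(f"{name:5s} {group:5s} mean CR = {np.mean(ratios):.2f}")
```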
Optimized Source-Channel Coding of Video Signals in Packet Loss Environments
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.67
U. Celikcan, E. Tuncel
{"title":"Optimized Source-Channel Coding of Video Signals in Packet Loss Environments","authors":"U. Celikcan, E. Tuncel","doi":"10.1109/DCC.2009.67","DOIUrl":"https://doi.org/10.1109/DCC.2009.67","url":null,"abstract":"A novel predictive joint source-channel video coding scheme is proposed and its superiority against standard video coding is demonstrated in environments with heavy packet loss. The strength of the scheme stems from the fact that it explicitly takes into account the two modes of operation (packet loss or no packet loss) at the decoder and optimizes the corresponding reconstruction filters together with the the prediction filter at the encoder simultaneously. As a result, the prediction coefficient tends to be much smaller than both the correlation coefficient between two corresponding frames and what standard video coding techniques use (i.e., 1), thereby leaving most of the inter-frame correlation intact and increasing the error resilience.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"317 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115222940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
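To see why a prediction coefficient below the inter-frame correlation helps under packet loss, here is a toy scalar simulation, not the paper's filter-optimization procedure: frames follow an AR(1) model with correlation rho, the decoder either adds the received residual or falls back to prediction-only concealment, and a smaller coefficient a damps the propagation of concealment errors. The fixed-variance quantization noise is a simplifying assumption, so the bit-rate cost of larger residuals is not modeled.

```python
# Toy illustration only: smaller prediction coefficient a < rho limits error
# propagation at the decoder when packets are lost.
import numpy as np

def simulate(a, rho=0.95, loss_prob=0.2, n=20000, noise_std=0.05, seed=1):
    rng = np.random.default_rng(seed)
    x_prev_enc = 0.0   # encoder's reconstruction of the previous frame
    x_prev_dec = 0.0   # decoder's reconstruction of the previous frame
    x_prev = 0.0       # true previous frame
    mse = 0.0
    for _ in range(n):
        x = rho * x_prev + rng.normal(0.0, np.sqrt(1 - rho**2))
        resid = x - a * x_prev_enc
        resid_q = resid + rng.normal(0.0, noise_std)  # crude fixed-variance quantizer
        x_prev_enc = a * x_prev_enc + resid_q
        if rng.random() < loss_prob:
            x_dec = a * x_prev_dec                     # packet lost: predict only
        else:
            x_dec = a * x_prev_dec + resid_q           # residual received
        mse += (x - x_dec) ** 2
        x_prev, x_prev_dec = x, x_dec
    return mse / n

for a in (0.95, 0.7, 0.5):
    print(f"a = {a:.2f}  MSE = {simulate(a):.4f}")
```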
pFPC: A Parallel Compressor for Floating-Point Data
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.43
Martin Burtscher, P. Ratanaworabhan
{"title":"pFPC: A Parallel Compressor for Floating-Point Data","authors":"Martin Burtscher, P. Ratanaworabhan","doi":"10.1109/DCC.2009.43","DOIUrl":"https://doi.org/10.1109/DCC.2009.43","url":null,"abstract":"This paper describes and evaluates pFPC, a parallel implementation of the lossless FPC compression algorithm for 64-bit floating-point data. pFPC can trade off compression ratio for throughput. For example, on a 4-core 3 GHz Xeon system, it compresses our nine datasets by 18% at a throughput of 1.36 gigabytes per second and by 41% at a throughput of 570 megabytes per second. Decompression is even faster. Our experiments show that the thread count should match or be a small multiple of the data's dimensionality to maximize the compression ratio and the chunk size should be at least equal to the system's page size to maximize the throughput.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121687401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
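Below is a structural sketch of the chunk-and-thread layout described in the abstract, with zlib standing in for the FPC predictor (so the ratios will not match the paper's): chunks are assigned to threads round-robin, which is why a thread count matching the data's dimensionality tends to give each thread values from the same dimension. The chunk size and dataset are placeholders.

```python
# Round-robin chunk-to-thread assignment for parallel compression (sketch only;
# zlib replaces the FPC value predictor used by pFPC).
import zlib
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def compress_stream(doubles: np.ndarray, threads: int, chunk_vals: int = 4096):
    raw = doubles.astype(np.float64).tobytes()
    chunk_bytes = chunk_vals * 8
    chunks = [raw[i:i + chunk_bytes] for i in range(0, len(raw), chunk_bytes)]
    # Thread t gets chunks t, t+T, t+2T, ...
    per_thread = [b"".join(chunks[t::threads]) for t in range(threads)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        compressed = list(pool.map(lambda b: zlib.compress(b, 6), per_thread))
    return sum(len(c) for c in compressed)

rng = np.random.default_rng(0)
# 3-dimensional records (e.g. x, y, z per particle), interleaved in memory.
data = rng.normal(size=(100_000, 3)).cumsum(axis=0).ravel()
for t in (1, 3, 6):
    size = compress_stream(data, t)
    print(f"{t} threads: {data.nbytes / size:.2f}x")
```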
Bits in Asymptotically Optimal Lossy Source Codes Are Asymptotically Bernoulli
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.21
R. Gray, T. Linder
{"title":"Bits in Asymptotically Optimal Lossy Source Codes Are Asymptotically Bernoulli","authors":"R. Gray, T. Linder","doi":"10.1109/DCC.2009.21","DOIUrl":"https://doi.org/10.1109/DCC.2009.21","url":null,"abstract":"A formal result is stated and proved showing that the bit stream produced by the encoder of a nearly optimal sliding-block source coding of a stationary and ergodic source is close to an equiprobable i.i.d. binary process.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116443135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
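As an informal empirical analogue of the statement (zlib is not the sliding-block coder the theorem is about), the snippet below compresses a heavily biased binary source and checks that the output bits are roughly balanced and nearly uncorrelated, i.e., close to fair coin flips.

```python
# Informal check: the output bits of an effective compressor look roughly like
# an equiprobable i.i.d. binary process.
import zlib
import numpy as np

rng = np.random.default_rng(0)
source = rng.choice([0, 1], size=500_000, p=[0.9, 0.1]).astype(np.uint8)  # biased source
payload = zlib.compress(source.tobytes(), 9)

bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
print("fraction of ones :", bits.mean())
print("lag-1 correlation:", np.corrcoef(bits[:-1], bits[1:])[0, 1])
```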
Optimization of Correlated Source Coding for Event-Based Monitoring in Sensor Networks
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.56
J. Singh, A. Saxena, K. Rose, Upamanyu Madhow
{"title":"Optimization of Correlated Source Coding for Event-Based Monitoring in Sensor Networks","authors":"J. Singh, A. Saxena, K. Rose, Upamanyu Madhow","doi":"10.1109/DCC.2009.56","DOIUrl":"https://doi.org/10.1109/DCC.2009.56","url":null,"abstract":"Motivated by the paradigm of event-based monitoring,which can potentially alleviate the inherent bandwidth and energy constraints associated with wireless sensor networks, we consider the problem of joint coding of correlated sources under a cost criterion that is appropriately conditioned on event occurrences. The underlying premise is that individual sensors only have access to partial information and, in general, cannot reliably detect events. Hence, sensors optimally compress and transmit the data to a fusion center, so as to minimize the {emph{expected distortion in segments containing events}}. In this work, we derive and demonstrate the approach in the setting of entropy constrained distributed vector quantizer design,using a modified distortion criterion that appropriately accounts for the joint statistics of the events and the observation data. Simulation results show significant gains over conventional design as well as existing heuristic based methods, and provide experimental evidence to support the promise of our approach.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128288286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
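A minimal weighted-Lloyd sketch of the event-conditioned idea, not the paper's entropy-constrained distributed design: each training vector's contribution to the centroid update is weighted by an estimated probability that it lies in an event segment, so codebook resolution concentrates where event-time fidelity matters. The weight model and function names are illustrative.

```python
# Weighted Lloyd iteration: distortion contributions are weighted by a per-sample
# event probability (toy weights, illustrative only).
import numpy as np

def weighted_lloyd(x, w, k=8, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    codebook = x[rng.choice(len(x), k, replace=False)]
    assign = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            mask = assign == j
            if w[mask].sum() > 0:
                codebook[j] = np.average(x[mask], axis=0, weights=w[mask])
    return codebook, assign

rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 2))
event_prob = (np.linalg.norm(x, axis=1) > 2).astype(float) * 0.9 + 0.1  # toy weights
codebook, _ = weighted_lloyd(x, event_prob)
print(codebook)
```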
Adaptive Rate Allocation Algorithm for Transmission of Multiple Embedded Bit Streams over Time-Varying Noisy Channels
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.11
Ahmad Hatam, A. Banihashemi
{"title":"Adaptive Rate Allocation Algorithm for Transmission of Multiple Embedded Bit Streams over Time-Varying Noisy Channels","authors":"Ahmad Hatam, A. Banihashemi","doi":"10.1109/DCC.2009.11","DOIUrl":"https://doi.org/10.1109/DCC.2009.11","url":null,"abstract":"An efficient rate allocation algorithm for the progressive transmission of multiple images over time-varying noisy channels is proposed. The algorithm is initiated by the distortion optimal solution [1] for the first image and searches for the optimal rate-allocation for each subsequent image in the neighborhood of the solution for the previous image. Given the initial solution, the algorithm is linear-time in the number of transmitted packets per image and its rate allocation solution for each image can achieve a performance equal or very close to the distortion optimal solution for that image. Our simulations for the transmission of images, encoded by embedded source coders, over the binary symmetric channel (BSC) show that with very low complexity the proposed algorithm successfully adapts the channel code rates to the changes of the channel parameter.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132111288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
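A toy local-search sketch of the carry-over idea: the allocation found for the previous image seeds the search for the current one, and only single-packet rate changes are examined. The failure model, candidate rates, and objective below are placeholders, not the paper's distortion-optimal formulation.

```python
# Local search over per-packet channel-code rates, seeded with the previous
# image's allocation (toy objective and failure model, illustrative only).
import numpy as np

RATES = np.array([1/3, 1/2, 2/3, 3/4])   # candidate channel-code rates
PACKET_BITS = 1000

def packet_fail(rate, eps):
    # Toy decoding-failure model: stronger (lower-rate) codes fail less often.
    return eps ** (2.0 * (1.0 - rate) + 0.5)

def useful_bits(alloc, eps):
    # Progressive stream: decoding stops at the first failed packet.
    ok, total = 1.0, 0.0
    for idx in alloc:
        ok *= 1.0 - packet_fail(RATES[idx], eps)
        total += ok * RATES[idx] * PACKET_BITS
    return total

def local_search(prev_alloc, eps):
    alloc = list(prev_alloc)
    improved = True
    while improved:
        improved = False
        best = useful_bits(alloc, eps)
        for i in range(len(alloc)):
            for step in (-1, 1):
                j = alloc[i] + step
                if 0 <= j < len(RATES):
                    cand = alloc[:i] + [j] + alloc[i + 1:]
                    val = useful_bits(cand, eps)
                    if val > best:
                        alloc, best, improved = cand, val, True
    return alloc

prev = [2] * 16                      # allocation carried over from the previous image
print(local_search(prev, eps=0.05))  # channel worsened: expect lower rates up front
```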
Dual-Direction Prediction Vector Quantization for Lossless Compression of LASIS Data
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.13
Jing Ma, Chengke Wu, Yunsong Li, Keyan Wang
{"title":"Dual-Direction Prediction Vector Quantization for Lossless Compression of LASIS Data","authors":"Jing Ma, Chengke Wu, Yunsong Li, Keyan Wang","doi":"10.1109/DCC.2009.13","DOIUrl":"https://doi.org/10.1109/DCC.2009.13","url":null,"abstract":"Large Aperture Static Imaging Spectrometer(LASIS) is a new kind ofinterferometer spectrometer with the advantages of high throughputand large field of view. The LASIS data contains both spatial andspectral information in each frame which indicate the location shifting and modulatedoptical signal along Optical Path Difference(OPD). Based on these characteristics,we propose a lossless data compression method named Dual-directionPrediction Vector Quantization(DPVQ). With a dual-directionprediction on both spatial and spectral direction, redundancy inLASIS data is largely removed by minimizing the prediction residuein DPVQ. Then a fast vector quantization(VQ) avoiding codebooksplitting process is applied after prediction. Considering timeefficiency, the prediction and VQ in DPVQ are optimized to reducethe calculations, so that optimized prediction saves 60% runningtime and fast VQ saves about 25% running time with a similarquantization quality compared with classical generalized Lloydalgorithm(GLA). Experimental results show that DPVQ can achieve amaximal Compression Ratio(CR) at about 3.4, which outperforms manyexisting lossless compression algorithms.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133051453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
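A toy version of the dual-direction prediction step only (the paper follows it with a fast VQ stage): each sample is predicted from its spatial neighbor within the frame and from the same pixel in the previous frame along the OPD axis, and the residual is what would be coded. The synthetic interferogram cube is invented for the example.

```python
# Dual-direction prediction residual (illustration only; no VQ stage here).
import numpy as np

def dual_direction_residual(cube):
    """cube: (frames_along_OPD, rows, cols) integer data."""
    cube = cube.astype(np.int32)
    spatial = np.zeros_like(cube)
    spectral = np.zeros_like(cube)
    spatial[:, :, 1:] = cube[:, :, :-1]    # left neighbor in the same frame
    spectral[1:] = cube[:-1]               # same pixel in the previous frame
    prediction = (spatial + spectral) // 2
    return cube - prediction

rng = np.random.default_rng(0)
cube = np.round(100 * np.sin(np.linspace(0, 8, 32))[:, None, None]
                + rng.normal(0, 2, (32, 64, 64))).astype(np.int16)
residual = dual_direction_residual(cube)
print(f"raw std: {cube.std():.2f}   residual std: {residual.std():.2f}")
```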
Nonuniform Dithered Quantization
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.78
E. Akyol, K. Rose
{"title":"Nonuniform Dithered Quantization","authors":"E. Akyol, K. Rose","doi":"10.1109/DCC.2009.78","DOIUrl":"https://doi.org/10.1109/DCC.2009.78","url":null,"abstract":"Dithered quantization has useful properties such as producing quantization noise independent of the source and continuous reconstruction at the decoder side. Dithered quantizers have traditionally been considered within their natural setting of uniform quantization framework. A uniformly distributed (with step size matched to the quantization interval) dither signal is added before quantization and the same dither signal is subtracted from the quantized value at the decoder side (only subtractive dithering is considered in this paper). The quantized values are entropy coded conditioned on the dither signal.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133138574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
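The uniform subtractive-dither baseline that the paper generalizes can be written in a few lines: the encoder quantizes x + d with a dither d uniform over one quantization step, the decoder subtracts the same d, and the reconstruction error becomes uniform and essentially uncorrelated with the source. The sketch below checks those two properties numerically; the source distribution and step size are arbitrary choices.

```python
# Uniform subtractive dithering: error is uniform and independent of the source.
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5                                      # quantizer step size
x = rng.laplace(scale=1.0, size=200_000)         # non-uniform source
d = rng.uniform(-delta / 2, delta / 2, x.size)   # dither known to both ends

q = delta * np.round((x + d) / delta)            # encoder: quantize dithered input
x_hat = q - d                                    # decoder: subtract the same dither

err = x_hat - x
print("error/source correlation:", np.corrcoef(err, x)[0, 1])        # ~0
print("error std vs delta/sqrt(12):", err.std(), delta / np.sqrt(12))
```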
A Zero Padding SVD Encoder to Compress Electrocardiogram
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.48
C. Agulhari, I. S. Bonatti, P. Peres
{"title":"A Zero Padding SVD Encoder to Compress Electrocardiogram","authors":"C. Agulhari, I. S. Bonatti, P. Peres","doi":"10.1109/DCC.2009.48","DOIUrl":"https://doi.org/10.1109/DCC.2009.48","url":null,"abstract":"A new method to compress electrocardiogram (ECG) signals, whose novelty is related to the choice of an appropriate basis of representation for each ECG to be compressed using the Singular Values Decomposition (SVD), is proposed in this paper. The proposed method, named Zero Padding SVD Encoder, consists of two steps: a preprocessing step where the ECG is separated into a set of signals, which are the beat pulses of the ECG; and a compression step where the SVD is applied to the set of beat pulses in order to find the basis that better represents the entire ECG. The elements of the basis are encoded using a wavelet procedure and the coefficientes of projection of the signal on the basis are quantized using an adaptive quantization procedure. Numerical experiments are performed with the electrocardiograms of the MIT-BIH database, demonstrating the efficiency of the proposed method.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126977762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
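A toy sketch of the core idea, assuming synthetic Gaussian-shaped beats (the full encoder also wavelet-codes the basis vectors and adaptively quantizes the coefficients): segmented beats are zero-padded to a common length, an SVD basis is computed from them, and each beat is represented by a few projection coefficients.

```python
# Zero-pad beats, build an SVD basis, keep a few coefficients per beat (sketch).
import numpy as np

def zero_pad(beats):
    n = max(len(b) for b in beats)
    return np.array([np.pad(b, (0, n - len(b))) for b in beats])

rng = np.random.default_rng(0)
template = np.exp(-0.5 * ((np.arange(220) - 110) / 12.0) ** 2)   # crude beat shape
beats = []
for _ in range(50):
    n = int(rng.integers(190, 221))                   # variable beat lengths
    beats.append(template[:n] * rng.uniform(0.8, 1.2) + rng.normal(0, 0.01, n))

X = zero_pad(beats)                       # beats x samples matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                                     # basis vectors kept for this record
coeffs = X @ Vt[:k].T                     # per-beat projection coefficients
X_rec = coeffs @ Vt[:k]
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(f"kept {k} basis vectors, relative error = {err:.3%}")
```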
Set Partitioning in Hierarchical Frequency Bands (SPHFB)
2009 Data Compression Conference Pub Date: 2009-03-16 DOI: 10.1109/DCC.2009.63
H. Ochoa, O. Vergara-Villegas, V. Sánchez, G. Rosiles, J. Vega-Pineda
{"title":"Set Partitioning in Hierarchical Frequency Bands (SPHFB)","authors":"H. Ochoa, O. Vergara-Villegas, V. Sánchez, G. Rosiles, J. Vega-Pineda","doi":"10.1109/DCC.2009.63","DOIUrl":"https://doi.org/10.1109/DCC.2009.63","url":null,"abstract":"A novel algorithm for very low bit rate based on hierarchical partition of subbands in the wavelet domain is proposed. The algorithm uses the set partitioning technique to sort the transformed coefficients. The threshold of each subband is calculated and the subbands scanning sequence is determined by the magnitude of the thresholds which establish a hierarchical scanning not only for the set of coefficients with large magnitude, but also for the subbands. Results show that SPHFB provides good image quality for very low bit rates.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123378576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
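A sketch of the subband-ordering idea with a plain Haar transform standing in for the paper's wavelet coder: each subband's threshold is taken as the largest power of two not exceeding its peak magnitude, and subbands are then scanned in decreasing threshold order. The test image and decomposition depth are arbitrary.

```python
# Per-subband thresholds and threshold-ordered scanning (Haar stand-in, sketch only).
import numpy as np

def haar2d_level(a):
    """One 2-D Haar analysis step: returns (LL, (LH, HL, HH))."""
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)

rng = np.random.default_rng(0)
img = rng.normal(0, 1, (64, 64)).cumsum(0).cumsum(1)   # smooth test image

subbands = {}
ll = img
for level in range(1, 4):
    ll, (lh, hl, hh) = haar2d_level(ll)
    subbands.update({f"LH{level}": lh, f"HL{level}": hl, f"HH{level}": hh})
subbands["LL3"] = ll

def threshold(band):
    peak = np.abs(band).max()
    return 0 if peak < 1 else 2 ** int(np.floor(np.log2(peak)))

# Scan subbands in decreasing threshold order.
order = sorted(subbands, key=lambda name: threshold(subbands[name]), reverse=True)
for name in order:
    print(name, threshold(subbands[name]))
```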