Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225): Latest Publications

Successive coefficient refinement for embedded lossless image compression
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672262
C. Creusere
Abstract: Summary form only given. We consider here a new approach to successive coefficient refinement which speeds up embedded image compression and decompression. Rather than sending the binary refinement symbol typical of existing embedded coders, our algorithm uses a ternary refinement symbol, allowing the encoder to tell the decoder when its current approximation of a given wavelet coefficient is exact. Thus, both encoder and decoder operate faster because they process fewer refinement symbols, yet the fundamental structure of the refinement process remains unchanged, i.e. it still represents a binary subdivision of the uncertainty interval. To implement a complete encoder, we combine the proposed refinement process with Shapiro's embedded zerotree wavelet (EZW) algorithm. Results for lossless compression are shown. Without optimization, the speed increase is between 5 and 12%; with optimization, it is between 9 and 15%.
Citations: 1
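
The early-termination idea is easy to picture in code. Below is a minimal Python sketch of ternary refinement for a single integer coefficient, assuming a midpoint reconstruction rule and the symbol names '0', '1', and 'E'; these conventions are illustrative assumptions, not taken from the paper.

```python
def ternary_refine(coeff, lo, hi):
    """Refine an integer coefficient known to lie in [lo, hi).
    '0'/'1' select the lower/upper half of the uncertainty interval
    (the usual binary subdivision); 'E' tells the decoder its current
    midpoint approximation is exact, so refinement stops early."""
    symbols = []
    while hi - lo > 1:
        approx = (lo + hi) // 2          # decoder's current reconstruction
        if coeff == approx:
            symbols.append('E')          # exact: no more symbols needed
            return symbols
        if coeff < approx:
            symbols.append('0')          # keep lower half
            hi = approx
        else:
            symbols.append('1')          # keep upper half
            lo = approx
    return symbols                       # width-1 interval: value is lo

# Example: ternary_refine(37, 0, 64) -> ['1', '0', '0', '1', '0', 'E']
```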
Parallel algorithms for multi-indexed recurrence relations with applications to DPCM image compression
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672326
Abdou Youssef
Abstract: Summary form only given. DPCM decoding is essentially the computation of a 2-indexed scalar recurrence relation; the two indices are the row and column positions of the pixels. Although several logarithmic-time parallel algorithms for solving 1-indexed recurrence relations have been designed, no work has been reported on multi-indexed recurrence relations. Considering the importance of fast DPCM decoding of imagery, parallel algorithms for solving multi-indexed recurrence relations merit serious study. We designed novel parallel algorithms for solving 2-indexed recurrence relations, and identified the parallel architectures best suited for them. We developed three approaches: index sequencing, index decoupling, and dimension shifting. To solve a 2-indexed relation in DPCM decoding of an n×n image, index sequencing breaks down the relation into a sequence of n 1-indexed scalar recurrence relations that must be solved one after another. Each relation is then solved by a parallel O(log n)-time algorithm on an n-processor hypercube or partitionable bus; thus, the n relations take O(n log n) time on n processors. Index decoupling, applicable in a common case of DPCM, breaks the 2-indexed relation into n independent 1-indexed recurrence relations, which are then solved simultaneously in O(log n) parallel time, using n² processors configured as a hypercube or a mesh of partitionable buses.
Citations: 2
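
To make the index-decoupling approach concrete, here is a small Python sketch under the assumption (one common DPCM case) that each pixel is predicted only from its left neighbor, so x[i, j] = x[i, j-1] + d[i, j]; each row then becomes an independent prefix sum, which a parallel scan solves in O(log n) time. numpy's cumsum stands in for the parallel scan.

```python
import numpy as np

def dpcm_decode_decoupled(residuals: np.ndarray) -> np.ndarray:
    """Decode an n x n DPCM image when each pixel is predicted from its
    left neighbor only: x[i, j] = x[i, j-1] + d[i, j], with x[i, -1] = 0.
    The 2-indexed recurrence decouples into n independent row-wise
    prefix sums; cumsum stands in for an O(log n) parallel scan."""
    return np.cumsum(residuals, axis=1)

# Example: dpcm_decode_decoupled(np.array([[1, 2, 3]])) -> [[1, 3, 6]]
```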
LZRW1 without hashing
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672311
Y. Reznik
Abstract: Summary form only given. A very fast longest-match string search algorithm for Ziv-Lempel compression has been proposed. The new algorithm uses a variable-radix search tree of limited maximum size with an appropriate node-replacement strategy. The efficiency of the new algorithm has been practically evaluated using the LZRW1 implementation as a test model. The results of the evaluation are presented in a table.
Citations: 1
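
The abstract gives only the outline, so the following Python sketch is one plausible reading, not the paper's algorithm: a depth-limited radix tree over recent input whose nodes remember the most recent position of the string spelled by the path to them. Here the tree simply stops growing once a node budget is exhausted, a crude stand-in for the paper's node-replacement strategy.

```python
class TrieNode:
    __slots__ = ('children', 'pos')
    def __init__(self):
        self.children = {}   # byte value -> TrieNode
        self.pos = -1        # most recent position of this prefix

def trie_insert(root, data, i, max_depth, budget):
    """Insert the string starting at data[i], up to max_depth bytes.
    budget is a one-element list counting nodes we may still allocate."""
    node = root
    for k in range(max_depth):
        if i + k >= len(data):
            break
        c = data[i + k]
        if c not in node.children:
            if budget[0] <= 0:       # tree full: stop growing this path
                return
            node.children[c] = TrieNode()
            budget[0] -= 1
        node = node.children[c]
        node.pos = i                 # refresh to the latest occurrence

def trie_longest_match(root, data, i):
    """Walk the tree as far as the input matches; return (length, pos)
    of the longest previous match for data[i:]."""
    node, length, pos = root, 0, -1
    while i + length < len(data) and data[i + length] in node.children:
        node = node.children[data[i + length]]
        length, pos = length + 1, node.pos
    return length, pos
```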
Efficient lossless coding of medical image volumes using reversible integer wavelet transforms
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672188
A. Bilgin, G. Zweig, M. Marcellin
Abstract: A novel lossless medical image compression algorithm based on three-dimensional integer wavelet transforms and zerotree coding is presented. The EZW algorithm is extended to three dimensions and context-based adaptive arithmetic coding is used to improve its performance. The algorithm (3-D CB-EZW) efficiently encodes image volumes by exploiting the dependencies in all three dimensions, while enabling lossy and lossless compression from the same bitstream. Results on lossless compression of CT and MR images are presented, and compared to other lossless compression algorithms. The progressive performance of the 3-D CB-EZW algorithm is also compared to other lossy progressive coding algorithms. For representative images, the 3-D CB-EZW algorithm produced an average of 14% and 20% decrease in compressed file sizes for CT and MR images, respectively, compared to the best available 2-D lossless compression techniques.
Citations: 41
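
For readers unfamiliar with reversible integer wavelets, the sketch below shows the simplest member of the family, the S-transform (integer Haar), along one axis; a 3-D transform applies such a step along rows, columns, and slices in turn. The paper evaluates several such transforms, so treat this only as an illustration of the reversibility property.

```python
import numpy as np

def s_transform_fwd(x):
    """Forward S-transform of a 1-D integer signal of even length.
    Returns integer approximation s and detail d; exactly invertible."""
    a = x[0::2].astype(np.int64)
    b = x[1::2].astype(np.int64)
    d = b - a                # integer detail
    s = a + (d >> 1)         # floor average via arithmetic shift
    return s, d

def s_transform_inv(s, d):
    """Invert the forward step above, bit for bit."""
    a = s - (d >> 1)
    b = d + a
    out = np.empty(s.size + d.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

# Round trip: s_transform_inv(*s_transform_fwd(np.array([5, 2, 7, 7])))
# returns [5, 2, 7, 7] exactly -- the property lossless coding relies on.
```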
Color image compression by stack-run-end coding
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672319
Min-Jen Tsai
Abstract: Summary form only given. We present a new wavelet-based coding algorithm for color image compression. The key innovation of this algorithm is a new context-oriented conversion of information for data compression. A small set of symbols is designed to convert the information from the wavelet transform domain into a compact data structure for each subband. Unlike zerotree coding and its variants, which build hierarchical parent-child dependencies across subbands into their data representation, our method is a low-complexity intrasubband coding scheme that only addresses information within a subband or combines information across subbands. The scheme first performs color space conversion, followed by uniform scalar quantization. The quantized coefficients are then organized into a concise (stack, run, end) data structure; raster scanning within each subband is the most commonly used order, but any predefined scanning order also works. Compared with standard stack-run coding, our method generalizes the symbol representation and extends the symbol alphabet. The termination symbols, which signal that all remaining coefficients are zero to the end of the subband or across subbands to the end of the image, help speed up decoding. Experimental results show that our approach is very competitive with refined zerotree-type schemes.
Citations: 0
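
As a rough illustration of the (stack, run, end) structure, the Python sketch below encodes one subband's quantized coefficients as (zero-run, value) pairs with a terminating end-of-subband symbol; the symbol names and exact format are assumptions, not the paper's specification.

```python
END = 'EOS'  # assumed name for the end-of-subband termination symbol

def stack_run_end_encode(coeffs):
    """Encode one subband's quantized coefficients (in scanning order)
    as (zero_run_length, nonzero_value) pairs; a single END symbol
    implies that every remaining coefficient is zero."""
    last_nz = max((i for i, c in enumerate(coeffs) if c != 0), default=-1)
    out, run = [], 0
    for c in coeffs[:last_nz + 1]:
        if c == 0:
            run += 1
        else:
            out.append((run, c))  # run of zeros, then a significant value
            run = 0
    out.append(END)               # trailing zeros need no symbols at all
    return out

# Example: stack_run_end_encode([0, 0, 3, 0, -1, 0, 0])
#          -> [(2, 3), (1, -1), 'EOS']
```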
Adaptive-rate coding modulation system for digital image transmission
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672294
J. Kleider, G. Abousleman
Abstract: Summary form only given. We propose two methods to provide optimal image quality at a fixed image delivery rate for any given transmission channel condition. The first method, channel-controlled variable-rate (CCVR) image coding, employs adaptive-rate source coding and channel coding, while operating with a fixed modulation symbol rate. The second method, adaptive-rate coding-modulation (ARCM), extends the CCVR system by utilizing adaptive modulation. Both methods use a variable-compression-ratio image coder and variable-rate channel coding. The objective is to maximize the quality of the reconstructed image at the receiver when transmitted through Rayleigh fading and additive white Gaussian noise (AWGN). The CCVR system maximizes the reconstructed image quality through a bit-rate trade-off between the source and channel coders. The ARCM method provides a trade-off between the rates of source and channel coding, and the modulation rate. Both methods require knowledge of the channel state, which is used by the receiver to inform the transmitter, via a feedback channel, of the optimal strategy for image compression, channel coding, and modulation format. The resulting systems achieve up to a 17 dB improvement over the peak signal-to-noise ratio (PSNR) performance of a system using a fixed-compression-ratio image coder and fixed-rate channel coding. Reconstructed image quality is evaluated through both quantitative and subjective measures using peak signal-to-noise ratio and visual analysis, respectively.
Citations: 0
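
The core control loop of both systems is a feedback-driven selection among operating points. The toy Python sketch below illustrates only that selection step; the candidate table and the PSNR predictor are invented for illustration and do not come from the paper.

```python
def toy_psnr_model(point, snr_db):
    """Invented quality predictor: more source bits help, while operating
    points too aggressive for the channel are penalized."""
    src_bpp, code_rate, bits_per_symbol = point
    overload = max(0.0, bits_per_symbol * code_rate - snr_db / 3.0)
    return 20.0 + 15.0 * src_bpp - 8.0 * overload

def pick_operating_point(snr_db, candidates, model=toy_psnr_model):
    """Feedback-driven selection: choose the (source rate, channel code
    rate, modulation) triple with the best predicted PSNR."""
    return max(candidates, key=lambda point: model(point, snr_db))

# (source bpp, channel code rate, modulation bits/symbol) -- invented values
candidates = [(0.50, 1/2, 2), (0.75, 2/3, 4), (1.00, 3/4, 6)]
print(pick_operating_point(5.0, candidates))   # harsh channel -> robust point
print(pick_operating_point(25.0, candidates))  # clean channel -> high rate
```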
A memory-efficient adaptive Huffman coding algorithm for very large sets of symbols
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672310
S. Pigeon, Yoshua Bengio
Abstract: Summary form only given. The problem of computing minimum-redundancy codes as symbols are observed one by one has received a lot of attention. However, existing algorithms implicitly assume either that the alphabet is small or that an arbitrary amount of memory is available for the creation of a coding tree. In real-life applications one may need to encode symbols coming from a much larger alphabet, e.g. when coding integers. We introduce a new algorithm for adaptive Huffman coding, called algorithm M, that uses space proportional to the number of frequency classes. The algorithm uses a tree whose leaves represent sets of symbols with the same frequency, rather than individual symbols. The code for each symbol is therefore composed of a prefix (specifying the set, or the leaf of the tree) and a suffix (specifying the symbol within the set of same-frequency symbols). The algorithm uses only two operations to remain as close as possible to the optimum: set migration and rebalancing. We analyze the computational complexity of algorithm M, and point to its advantages in terms of low memory complexity and fast decoding. Comparative experiments were performed on the Calgary corpus with algorithm M, with static Huffman coding, and with another adaptive Huffman coding algorithm, Vitter's algorithm Λ. Experiments show that M performs comparably to or better than the other algorithms but requires much less memory. Finally, we present an improved algorithm, M+, for non-stationary data, which models the distribution of the data in a fixed-size window of the data sequence.
Citations: 13
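
A minimal sketch of the frequency-class bookkeeping behind algorithm M, written in Python with assumed data structures: only set migration and the within-class suffix are shown, while the class-prefix Huffman tree and the rebalancing operation are omitted.

```python
from collections import defaultdict

class FreqClasses:
    """Leaves represent frequency classes (sets of equal-count symbols),
    so memory grows with the number of distinct counts, not the alphabet."""
    def __init__(self):
        self.classes = defaultdict(list)  # count -> symbols with that count
        self.count = defaultdict(int)     # symbol -> current count

    def update(self, sym):
        """Set migration: move sym from class c to class c + 1."""
        c = self.count[sym]
        if c:
            self.classes[c].remove(sym)
            if not self.classes[c]:
                del self.classes[c]       # empty classes disappear
        self.count[sym] = c + 1
        self.classes[c + 1].append(sym)

    def suffix(self, sym):
        """Fixed-length index of sym within its same-frequency class;
        the class prefix would come from a code over the classes."""
        peers = self.classes[self.count[sym]]
        n = len(peers)
        if n == 1:
            return ''
        width = (n - 1).bit_length()
        return format(peers.index(sym), f'0{width}b')
```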
Codeword assignment for fixed-length entropy coded video streams
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672155
Ramon Llados-Bernaus, R. Stevenson
Abstract: Fixed-length entropy codes (FLC) have been successfully employed as an alternative to the popular variable-length codes (VLC) in video codecs. This paper presents a codeword assignment for FLC-coded interframe coefficients and motion vectors that minimizes the effects of bit errors on the decoded video sequence. Intensive testing has shown that with the proposed solution, the distortion introduced by transmission errors is roughly half of that obtained with VLC codes, while compression efficiency is maintained.
Citations: 1
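
The paper does not spell out its assignment here, but a binary-reflected Gray code illustrates the goal: adjacent quantizer levels receive codewords differing in a single bit, so many single-bit channel errors decode to a nearby level. The sketch below is that standard construction, not the authors' assignment.

```python
def gray_assignment(n_bits):
    """Map quantizer level i to the binary-reflected Gray codeword
    i ^ (i >> 1); neighboring levels then differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(1 << n_bits)]

# gray_assignment(3) -> [0, 1, 3, 2, 6, 7, 5, 4]
# A corrupted bit often lands on a level close to the intended one,
# which limits the visible distortion a single channel error can cause.
```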
Correcting English text using PPM models
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672157
W. Teahan, S. Inglis, J. Cleary, Geoffrey Holmes
Abstract: An essential component of many applications in natural language processing is a language modeler able to correct errors in the text being processed. For optical character recognition (OCR), poor scanning quality or extraneous pixels in the image may cause one or more characters to be mis-recognized, while for spelling correction, two characters may be transposed, or a character may be inadvertently inserted or missed out. This paper describes a method for correcting English text using a PPM model. A method that segments words in English text is introduced and is shown to be a significant improvement over previously used methods. A similar technique is also applied as a post-processing stage after pages have been recognized by a state-of-the-art commercial OCR system. We show that the accuracy of the OCR system can be increased from 96.3% to 96.9%, a decrease of about 14 errors per page.
Citations: 30
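
As a stand-in for the PPM scoring the authors use, the sketch below ranks candidate corrections by character-trigram log-probability with Laplace smoothing; a real PPM model blends context orders with escape probabilities, so treat this only as the shape of the idea.

```python
import math
from collections import defaultdict

def train_trigrams(text):
    """Count character trigrams and their two-character contexts."""
    tri, ctx = defaultdict(int), defaultdict(int)
    padded = '  ' + text
    for i in range(2, len(padded)):
        tri[padded[i - 2:i + 1]] += 1
        ctx[padded[i - 2:i]] += 1
    return tri, ctx

def log_prob(s, tri, ctx, alphabet=96):
    """Laplace-smoothed log-probability of string s under the model."""
    padded, lp = '  ' + s, 0.0
    for i in range(2, len(padded)):
        lp += math.log((tri[padded[i - 2:i + 1]] + 1) /
                       (ctx[padded[i - 2:i]] + alphabet))
    return lp

def correct(candidates, model):
    """Pick the candidate string the language model finds most likely."""
    return max(candidates, key=lambda s: log_prob(s, *model))

model = train_trigrams('the quick brown fox jumps over the lazy dog ' * 50)
print(correct(['tbe', 'the', 'thc'], model))  # -> 'the'
```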
Improved lossless halftone image coding using a fast adaptive context template selection scheme
Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225) | Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672265
K. Denecker, P. Neve, I. Lemahieu
Abstract: Applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. We present an improvement on the context modeling scheme by adapting the context template to the periodic structure of the halftone image. This is a non-trivial problem for which we propose a fast, close-to-optimal context template selection scheme based on calculating and sorting the autocorrelation function on a part of the image. For evaluating our scheme, we have also investigated the compression performance of a suboptimal approach based on an incremental exhaustive search; this approach can be used in practice for decompression but not for compression because of timing requirements. We have experimented with classical halftones of different resolutions and sizes, screened at different angles.
Citations: 3
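
The selection scheme can be sketched directly from the description: compute the image's autocorrelation (FFT-based below), then keep the k causal offsets with the highest correlation as the context template. The causality rule and the value of k here are assumptions.

```python
import numpy as np

def select_template(img, k=10):
    """Return the k causal offsets (dy, dx) with the highest circular
    autocorrelation. Offset (dy, dx) means 'dy rows up, dx columns back',
    so causal offsets lie above the current row, or to its left."""
    x = img.astype(float) - img.mean()
    f = np.fft.rfft2(x)
    ac = np.fft.irfft2(f * np.conj(f), s=x.shape)  # Wiener-Khinchin theorem
    h, w = x.shape
    scored = []
    for dy in range(h // 2):
        for dx in range(-(w // 2), w // 2 + 1):
            if dy == 0 and dx <= 0:
                continue                   # keep the template causal
            scored.append((ac[dy % h, dx % w], (dy, dx)))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [offset for _, offset in scored[:k]]
```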