Proceedings DCC '97. Data Compression Conference: Latest Publications

High performance arithmetic coding for small alphabets
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582149
Xiaohui Xue, Wen Gao
Abstract: Summary form only given. Generally, there are two main obstacles in the application of arithmetic coding. One is the relatively heavy computational burden in the coding part, since at least two multiplications are needed for each symbol. The other is that a highly efficient statistical model is hard to implement. We observe that under some important circumstances the number of different symbols in the data stream is definitely small. We specially design both the coding part and the modeling part to obtain a high-performance arithmetic coder for the case of small alphabets. Our method is based on the improved arithmetic coding algorithm, which we further refine to be multiplication-free.
Citations: 3
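
As a rough illustration of how an arithmetic coder can avoid multiplications for a small alphabet, the sketch below subdivides the coding interval with shifts by approximating each symbol probability with a power of two. The dyadic approximation, the 16-bit working range, and the function name are assumptions for illustration only; the abstract does not give enough detail to reproduce the authors' coder or model, and renormalization and carry handling are omitted.

```python
# Toy sketch (not the authors' coder): interval subdivision for a small
# alphabet where symbol probabilities are approximated by powers of two,
# so each sub-range is obtained with a shift instead of a multiplication.

def dyadic_subranges(range_width, shift_per_symbol):
    """shift_per_symbol[s] = k means P(s) is approximated by 2**-k.
    Returns (low_offset, width) for each symbol inside [0, range_width)."""
    assert sum(2.0 ** -k for k in shift_per_symbol) <= 1.0
    widths = [range_width >> k for k in shift_per_symbol]
    # Give any slack left by the approximation to the most probable symbol
    # so the sub-ranges exactly tile the current range.
    widths[shift_per_symbol.index(min(shift_per_symbol))] += range_width - sum(widths)
    offsets, acc = [], 0
    for w in widths:
        offsets.append(acc)
        acc += w
    return list(zip(offsets, widths))

# Encoding one symbol then narrows the interval with an add and a shift:
low, width = 0, 1 << 16                 # 16-bit working range (illustrative)
shifts = [1, 2, 3, 3]                   # P = 1/2, 1/4, 1/8, 1/8 for symbols 0..3
sub = dyadic_subranges(width, shifts)
symbol = 2
low, width = low + sub[symbol][0], sub[symbol][1]
print(low, width)                       # new interval, no multiplications used
```
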
Content-adaptive postfiltering for very low bit rate video
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.581986
A. Jacquin, H. Okada, P. E. Crouch
Abstract: We propose a postfiltering algorithm which adapts to global image quality as well as (optionally) to semantic image content extracted from the video sequence. This approach is in contrast to traditional postfiltering techniques, which attempt to remove coding artifacts based on local signal characteristics only. Our postfilter is ideally suited to head-and-shoulders video coded at very low bit rates (less than 25.6 kbps), where coding artifacts are fairly strong and difficult to distinguish from fine image detail. Results are shown comparing head-and-shoulders sequences encoded at 16 kbps with an H.263-based codec to images postfiltered with the proposed content-adaptive postfilter. The postfilter removes most of the mosquito artifacts introduced by the low-bit-rate coder while preserving a good rendition of facial detail.
Citations: 14
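
To make the contrast with purely local postfiltering concrete, here is a minimal, generic signal-adaptive smoother (not the authors' filter, which additionally uses global quality and semantic content): it blends each pixel with its 3x3 mean, smoothing strongly where local variance is low (flat regions where mosquito noise is visible) and weakly where it is high (edges and facial detail). The variance threshold and blending rule are invented for illustration.

```python
import numpy as np

def adaptive_postfilter(img, var_threshold=100.0):
    """Illustrative only: blend each pixel with its 3x3 mean, smoothing
    strongly where local variance is low (flat areas prone to mosquito
    noise) and weakly where it is high (edges / facial detail).
    var_threshold is a made-up tuning knob, not from the paper."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            mean, var = window.mean(), window.var()
            # strength -> 1 in flat regions, -> 0 in detailed regions
            strength = 1.0 / (1.0 + var / var_threshold)
            out[y, x] = (1.0 - strength) * img[y, x] + strength * mean
    return out

decoded = np.random.randint(0, 256, (32, 32))   # stand-in for a decoded frame
filtered = adaptive_postfilter(decoded)
```
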
Text compression via alphabet re-representation
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582003
Philip M. Long, A. Natsev, J. Vitter
Abstract: We consider re-representing the alphabet so that a character's representation reflects its properties as a predictor of future text. This enables us to use an estimator from a restricted class to map contexts to predictions of upcoming characters. We describe an algorithm that uses this idea in conjunction with neural networks. The performance of this implementation is compared to other compression methods, such as UNIX compress, gzip, PPMC, and an alternative neural network approach.
Citations: 10
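
The general shape of the idea (characters become short real-valued vectors, and a restricted estimator maps a context of such vectors to a distribution over the next character) can be sketched as follows. The embeddings and weights here are random and untrained, and the single linear layer plus softmax is only a stand-in for the paper's neural-network estimator; the resulting probability is what an entropy coder would consume.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = [chr(c) for c in range(32, 127)]     # printable ASCII
EMB_DIM, CONTEXT = 8, 3

# Re-representation: each character maps to a short real vector.  In the
# paper these vectors are chosen to reflect how a character behaves as a
# predictor; here they are random placeholders.
embedding = {ch: rng.normal(size=EMB_DIM) for ch in ALPHABET}

# A single linear layer + softmax stands in for the neural predictor.
W = rng.normal(scale=0.1, size=(len(ALPHABET), EMB_DIM * CONTEXT))

def next_char_distribution(context):
    x = np.concatenate([embedding[ch] for ch in context[-CONTEXT:]])
    logits = W @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()

p = next_char_distribution("the")
bits = -np.log2(p[ALPHABET.index(' ')])          # cost an entropy coder would pay
print(f"P(' ' | 'the') = {p[ALPHABET.index(' ')]:.4f}, ~{bits:.2f} bits")
```
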
Efficient context-based entropy coding for lossy wavelet image compression
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582047
C. Chrysafis, Antonio Ortega
Abstract: We present an adaptive image coding algorithm based on novel backward-adaptive quantization/classification techniques. We use a simple uniform scalar quantizer to quantize the image subbands. Our algorithm puts each coefficient into one of several classes depending on the values of neighboring, previously quantized coefficients. These previously quantized coefficients form contexts which are used to characterize the subband data. To each context type corresponds a different probability model, so each subband coefficient is compressed with an arithmetic coder using the model appropriate to that coefficient's neighborhood. We show how the context selection can be driven by rate-distortion criteria, by choosing the contexts so that the total distortion for a given bit rate is minimized. Moreover, the probability models for each context are initialized and updated in a very efficient way, so that practically no overhead information has to be sent to the decoder. Our results are comparable to, or in some cases better than, the recent state of the art, and our algorithm is simpler than most published algorithms of comparable performance.
Citations: 135
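
A small sketch of the backward-adaptive context-modeling step described above: each coefficient is bucketed into a context from already-coded neighbours (so the decoder can rebuild the same context without side information), and a separate adaptive frequency table per context supplies the probabilities an arithmetic coder would use, measured here as ideal -log2 p costs. The bucketing thresholds, the clipped symbol alphabet, and the single-neighbour context are illustrative assumptions; the paper's rate-distortion-driven context selection is not shown.

```python
import math

ALPHABET = list(range(-8, 9))       # clipped quantizer indices (illustrative)

def context_of(prev_left, prev_up):
    """Bucket a coefficient by the magnitudes of two already-coded
    neighbours; thresholds are invented for illustration."""
    activity = abs(prev_left) + abs(prev_up)
    if activity == 0:  return 0
    if activity <= 2:  return 1
    if activity <= 8:  return 2
    return 3

# One adaptive frequency table per context.  Encoder and decoder update the
# tables identically from already-coded data, so no model is transmitted.
counts = {ctx: {s: 1 for s in ALPHABET} for ctx in range(4)}

def code_cost_and_update(ctx, symbol):
    table = counts[ctx]
    total = sum(table.values())
    bits = -math.log2(table[symbol] / total)   # ideal arithmetic-code cost
    table[symbol] += 1                         # backward adaptation
    return bits

row = [0, 0, 3, -1, 0, 5, 2, 0]                # one subband row, coded left to right
total_bits = sum(code_cost_and_update(context_of(row[i - 1] if i else 0, 0), q)
                 for i, q in enumerate(row))
print(f"{total_bits:.1f} bits for {len(row)} coefficients")
```
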
Robust image coding with perceptual-based scalability
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582133
M. G. Ramos, S. Hemami
Abstract: Summary form only given. We present a multiresolution-based image coding technique that achieves high visual quality through perceptual-based scalability and robustness to transmission errors. To achieve perceptual coding, the image is first segmented at the block level (16×16) into smooth, edge, and highly detailed regions, using the Hölder regularity property of the wavelet coefficients as well as their distributions. The activity classifications are used when coding the high-frequency wavelet coefficients. The image is compressed by first performing a 3-level hierarchical decomposition, yielding 10 subbands which are coded independently. The LL band is coded using reconstruction-optimized lapped orthogonal transforms, followed by quantization, run-length encoding, and Huffman coding. The high-frequency coefficients corresponding to the smooth regions are quantized to zero. The high-frequency coefficients corresponding to the edge regions are uniformly quantized, to maintain Hölder regularity and sharpness of the edges, while those corresponding to the highly detailed regions are quantized with a modified uniform quantizer with a dead zone. Bits are allocated based on the scale and orientation selectivity of each high-frequency subband as well as the activity regions inside each band corresponding to the edge and highly detailed regions of the image. The quantized high-frequency bands are then run-length encoded.
Citations: 9
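
The classify-then-quantize step lends itself to a short sketch: a block of high-frequency coefficients is labelled smooth, edge, or detailed, and the label selects zeroing, a uniform quantizer, or a dead-zone quantizer, as in the abstract. The classifier below uses crude peak/mean statistics with made-up thresholds in place of the paper's Hölder-regularity analysis, and the step sizes are placeholders.

```python
import numpy as np

def classify_block(hf_coeffs, edge_thresh=20.0, detail_thresh=8.0):
    """Crude stand-in for the paper's classifier: a large peak magnitude
    suggests an edge, a large mean magnitude suggests texture.
    Thresholds are made up."""
    mags = np.abs(hf_coeffs)
    if mags.max() > edge_thresh:
        return "edge"
    if mags.mean() > detail_thresh:
        return "detailed"
    return "smooth"

def quantize_block(hf_coeffs, label, step=4.0, dead_zone=2.0):
    if label == "smooth":                       # smooth regions: coefficients dropped
        return np.zeros_like(hf_coeffs)
    if label == "edge":                         # uniform quantizer keeps edges sharp
        return np.round(hf_coeffs / step) * step
    # detailed regions: uniform quantizer with a dead zone around zero
    q = np.round(hf_coeffs / step) * step
    q[np.abs(hf_coeffs) < dead_zone * step] = 0.0
    return q

block = np.random.default_rng(1).normal(scale=10, size=(16, 16))
label = classify_block(block)
print(label, np.count_nonzero(quantize_block(block, label)), "nonzero coefficients")
```
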
Low-cost prevention of error-propagation for data compression with dynamic dictionaries
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582007
J. Storer, J. Reif
Abstract: In earlier work we presented the k-error protocol, a technique for protecting a dynamic dictionary method from error propagation resulting from any k errors on the communication channel or in the compressed file. Here we further develop this approach and provide experimental evidence that it is highly effective in practice against a noisy channel or faulty storage medium. That is, for LZ2-based methods that "blow up" as a result of a single error, with the protocol in place, high error rates (with far more than the k errors for which the protocol was originally designed) can be sustained with no error propagation: the only corrupted bytes decoded are those that are part of the string represented by a pointer that was corrupted.
Citations: 7
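
The failure mode the protocol guards against is easy to demonstrate, even though the abstract does not give enough detail to reproduce the protocol itself. In the toy LZW (LZ2-family) coder below, flipping a single bit in one code corrupts not only that phrase but also the dictionary entries built from it, so the damage propagates through the rest of the decoded output. The example text and the choice of which code to corrupt are arbitrary.

```python
def lzw_encode(data):
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decode(codes):
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        elif code == len(dictionary):       # the classic KwKwK special case
            entry = w + w[:1]
        else:
            raise ValueError("corrupt pointer")
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

text = b"abracadabra abracadabra abracadabra"
codes = lzw_encode(text)
codes_bad = list(codes)
codes_bad[3] ^= 1                            # a single corrupted pointer...
print(lzw_decode(codes) == text)             # True
print(lzw_decode(codes_bad))                 # ...garbles everything after it
```
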
Significantly lower entropy estimates for natural DNA sequences
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.581998
D. Loewenstern, P. Yianilos
Abstract: If DNA were a random string over its alphabet {A,C,G,T}, an optimal code would assign 2 bits to each nucleotide. We imagine DNA to be a highly ordered, purposeful molecule, and might therefore reasonably expect statistical models of its string representation to produce much lower entropy estimates. Surprisingly, this has not been the case for many natural DNA sequences, including portions of the human genome. We introduce a new statistical model (compression algorithm), the strongest reported to date, for naturally occurring DNA sequences. Conventional techniques code a nucleotide using only slightly fewer bits (1.90) than one obtains by relying only on the frequency statistics of individual nucleotides (1.95). Our method in some cases increases this gap by more than five-fold (1.66) and may lead to better performance in microbiological pattern-recognition applications. One of our main contributions, and the principal source of these improvements, is the formal inclusion of inexact-match information in the model. The existence of matches at various distances forms a panel of experts which are then combined into a single prediction. The structure of this combination is novel and its parameters are learned using expectation maximization (EM).
Citations: 136
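
A toy rendering of the "panel of experts over match distances" idea: expert d bets that the next base repeats the base seen d positions earlier, and the experts' predictions are mixed with weights that adapt to how well each expert has been predicting. The fixed per-expert fidelity, the set of distances, and the multiplicative posterior reweighting are illustrative stand-ins; the paper fits its combination parameters with EM, which is not reproduced here.

```python
import math

BASES = "ACGT"

def expert_prediction(history, d, fidelity=0.7):
    """Expert d bets the next base equals the base d positions back.
    `fidelity` is an illustrative constant, not a fitted parameter."""
    if len(history) < d:
        return {b: 0.25 for b in BASES}
    guess = history[-d]
    other = (1.0 - fidelity) / 3.0
    return {b: (fidelity if b == guess else other) for b in BASES}

def mixture_code_length(seq, distances=(1, 2, 3, 4, 6, 8)):
    weights = {d: 1.0 / len(distances) for d in distances}
    total_bits = 0.0
    for i, base in enumerate(seq):
        preds = {d: expert_prediction(seq[:i], d) for d in distances}
        p = sum(weights[d] * preds[d][base] for d in distances)
        total_bits += -math.log2(p)
        # Posterior reweighting: experts that predicted well gain influence
        # (a crude online stand-in for the paper's EM parameter fitting).
        weights = {d: weights[d] * preds[d][base] for d in distances}
        norm = sum(weights.values())
        weights = {d: w / norm for d, w in weights.items()}
    return total_bits / len(seq)

seq = "ACGTACGTACGTACGTTTTTACGTACGT"
print(f"{mixture_code_length(seq):.2f} bits per base (2.00 = random)")
```
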
Efficient approximate adaptive coding
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582059
A. Turpin, Alistair Moffat
Abstract: We describe a mechanism for approximate adaptive coding that makes use of deferred probability update to obtain good throughput rates with no buffering of symbols from the input message. Our proposed mechanism makes use of a novel code-calculation process that allows an approximate code for a message of m symbols to be calculated in O(log m) time, improving upon previous methods. We also give analysis that bounds both the total computation time required to encode a message using the approximate code and the inefficiency of the resulting codeword set. Finally, experimental results are given that highlight the role the new method might play in a practical compression system. The current work builds upon two earlier papers. We previously described a mechanism for efficiently calculating a minimum-redundancy code for an alphabet in which there are many symbols with the same frequency of occurrence. We impose a modest amount of additional discipline upon the input frequencies, and show how the calculation of codewords can be performed in time and space logarithmic in the length of the message. The second area we have previously examined is the process of manipulating a code to actually perform compression. We examined mechanisms for encoding and decoding a prefix code that avoid any need for explicit enumeration of the source codewords. This means that we are free to change the source codewords at will during a message without incurring the additional cost of completely recalculating an n-entry codebook.
Citations: 4
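
The effect of deferred probability update can be sketched directly: symbols are coded against a frequency table that is refreshed only every B symbols, so per-symbol model maintenance is cheap and no input buffering is needed, at the cost of coding against slightly stale statistics. Code lengths below are ideal -log2 p values; the batch size, Laplace prior, and alphabet are assumptions, and the paper's O(log m) code-calculation machinery is not reproduced.

```python
import math
from collections import Counter

def deferred_adaptive_cost(symbols, alphabet, batch=64):
    """Ideal code length when the model is refreshed only every `batch`
    symbols (deferred update) instead of after every symbol."""
    counts = Counter({s: 1 for s in alphabet})   # Laplace prior
    frozen = dict(counts)                        # model actually used for coding
    total = sum(frozen.values())
    bits, pending = 0.0, 0
    for s in symbols:
        bits += -math.log2(frozen[s] / total)    # code against the stale model
        counts[s] += 1                           # statistics still gathered...
        pending += 1
        if pending == batch:                     # ...but folded in only now
            frozen, total, pending = dict(counts), sum(counts.values()), 0
    return bits

data = "the quick brown fox jumps over the lazy dog " * 50
alphabet = sorted(set(data))
print(f"deferred:   {deferred_adaptive_cost(data, alphabet) / len(data):.3f} bits/symbol")
print(f"per-symbol: {deferred_adaptive_cost(data, alphabet, batch=1) / len(data):.3f} bits/symbol")
```
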
Recursive block structured data compression
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582139
M. Tilgner, M. Ishida, T. Yamaguchi
Abstract: Summary form only given. A simple algorithm for efficient lossless compression of circuit test data with fast decompression speed is presented. It can easily be converted into a VLSI implementation. The algorithm is based on recursive block structured run-length coding and compresses at ratios of about 6:1 to 1000:1, higher than most widely known compression techniques.
Citations: 4
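
One plausible reading of "recursive block structured run-length coding" is sketched below: a block that is entirely zero costs a single flag, otherwise the block is split in half and each half is coded recursively, down to literal bits at a small leaf size. This particular splitting rule, leaf size, and token format are assumptions for illustration, not the paper's exact algorithm, but they show why sparse test vectors with long zero runs compress so well.

```python
def encode_block(bits, leaf=8):
    """Return a list of output tokens: '0' = all-zero block,
    '1' = subdivided block, or ('lit', bits) at leaf size."""
    if all(b == 0 for b in bits):
        return ["0"]
    if len(bits) <= leaf:
        return [("lit", tuple(bits))]
    mid = len(bits) // 2
    return ["1"] + encode_block(bits[:mid], leaf) + encode_block(bits[mid:], leaf)

def decode_block(tokens, length, leaf=8):
    tok = tokens.pop(0)
    if tok == "0":
        return [0] * length
    if isinstance(tok, tuple) and tok[0] == "lit":
        return list(tok[1])
    mid = length // 2
    return decode_block(tokens, mid, leaf) + decode_block(tokens, length - mid, leaf)

# Sparse test-vector-like data: long runs of zeros with a few care bits.
data = [0] * 200 + [1, 0, 1] + [0] * 300 + [1] + [0] * 8
tokens = encode_block(data)
assert decode_block(list(tokens), len(data)) == data
print(len(tokens), "tokens for", len(data), "bits")
```
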
Image coding using optimized significance tree quantization
Proceedings DCC '97. Data Compression Conference | Pub Date: 1997-03-25 | DOI: 10.1109/DCC.1997.582064
G. Davis, S. Chawla
Abstract: A number of recent embedded transform coders, including Shapiro's (1993) EZW scheme, Said and Pearlman's SPIHT scheme (see IEEE Trans. Circuits and Systems for Video Technology, vol. 6, no. 3, p. 243-250, 1996), and the EZDCT scheme of Xiong et al. (see IEEE Signal Processing Letters, no. 11, 1996), employ a common algorithm called significance tree quantization (STQ). Each of these coders has been selected from a large family of significance tree quantizers based on empirical work and a priori knowledge of transform-coefficient behavior. We describe an algorithm for selecting a particular form of STQ that is optimized for a given class of images. We apply our optimization procedure to the task of quantizing 8×8 DCT blocks. Our algorithm yields a fully embedded, low-complexity coder with performance from 0.7 to 2.5 dB better than baseline JPEG for standard test images.
Citations: 45
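
The significance-tree primitive common to EZW/SPIHT-style coders can be shown in a few lines: for a given threshold, a node whose entire descendant subtree is insignificant is coded with one zerotree symbol, otherwise the coder emits a significance symbol and descends. The tiny hand-made coefficient tree, symbol set, and threshold below are illustrative; the paper's contribution, optimizing the choice of tree/quantizer structure for a class of images, is not shown.

```python
def significance_pass(coeffs, children, node, threshold, out):
    """Emit one symbol per visited node: 'Z' = node and all descendants are
    insignificant (whole subtree pruned), 'S' = significant, 'I' =
    insignificant but with a significant descendant."""
    def subtree_max(n):
        return max([abs(coeffs[n])] + [subtree_max(c) for c in children.get(n, [])])
    if subtree_max(node) < threshold:
        out.append(("Z", node))            # one symbol covers the whole subtree
        return
    out.append(("S" if abs(coeffs[node]) >= threshold else "I", node))
    for c in children.get(node, []):
        significance_pass(coeffs, children, c, threshold, out)

# A tiny hand-made tree over 7 coefficients (node 0 is the root).
coeffs   = {0: 35, 1: 4, 2: -20, 3: 1, 4: 2, 5: -9, 6: 3}
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}

out = []
significance_pass(coeffs, children, 0, threshold=16, out=out)
print(out)   # the subtree under node 1 collapses to a single zerotree symbol
```
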