Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225): Latest Publications

Line based reduced memory, wavelet image compression
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672177
C. Chrysafis, Antonio Ortega
Abstract: In this work we propose a novel algorithm for wavelet-based image compression with very low memory requirements. The wavelet transform is performed progressively, and we only require that a reduced number of lines from the original image be stored at any given time. The result of the wavelet transform is the same as if we were operating on the whole image; the only difference is that the coefficients of different subbands are generated in an interleaved fashion. We begin encoding the (interleaved) wavelet coefficients as soon as they become available. We classify each new coefficient into one of several classes, each corresponding to a different probability model, with the models being adapted on the fly for each image. Our scheme is fully backward adaptive and relies only on coefficients that have already been transmitted. Our experiments demonstrate that our coder is very competitive with similar state-of-the-art coders. Note that schemes based on zerotrees or bit-plane encoding essentially require the whole image to be transformed (or else have to be implemented using tiling). These features make the algorithm well suited for a low-memory coding mode within the emerging JPEG2000 standard.
Citations: 392
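A minimal sketch of the line-based idea, assuming a one-level Haar transform for simplicity (the paper supports general wavelet filter banks and multiple levels): rows are consumed one at a time, only two filtered rows are ever buffered, and subband coefficients are emitted interleaved as soon as they are ready. All names here are illustrative.

```python
import numpy as np

def haar_1d(row):
    """One-level horizontal Haar transform of a single row."""
    even, odd = row[0::2], row[1::2]
    return np.concatenate([(even + odd) / 2.0, even - odd])

def line_based_haar(row_source):
    """Consume rows one at a time; yield (subband_label, row) pairs.

    row_source is any iterable of equal-length 1-D arrays, so the full
    image never has to be resident in memory. Assumes an even number
    of rows and columns.
    """
    buf = []  # holds at most two horizontally transformed rows
    for row in row_source:
        buf.append(haar_1d(np.asarray(row, dtype=float)))
        if len(buf) == 2:
            a, b = buf
            yield "low", (a + b) / 2.0   # vertical low-pass row
            yield "high", a - b          # vertical high-pass row
            buf = []

# Example: stream a 4x8 test image row by row.
image = np.arange(32, dtype=float).reshape(4, 8)
for label, coeffs in line_based_haar(iter(image)):
    print(label, coeffs)
```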
Analysis and comparison of various image downsampling and upsampling methods
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672325
Abdou Youssef
Abstract: Summary form only given. The goal is to gain a better understanding of the behavior of image down/upsampling combinations and to find better down/upsampling methods. We examined existing down/upsampling methods and proposed new ones. We formulated a frequency-response approach for understanding and evaluating down/upsampling combinations. The approach was validated experimentally by running the methods on various images and computing the signal-to-noise ratio (SNR) between the original and the down-then-upsampled images; the frequency-response-based evaluation correlates well with the experimental evaluation. Down/upsampling combinations were studied in a unified framework: signals are pre-filtered and then decimated by two, resulting in downsampling by two; afterwards, signals are zero-upsampled by two, i.e., 0s are inserted between successive samples, and then post-filtered. Our analysis showed that for optimal performance, the pre-filter and the post-filter should both be low-pass filters with cutoff at π/2. We considered five classes of filters. The first corresponds to the simplest down/upsampling combination, decimation/duplication, where decimation simply skips every other row and every other column, and duplication (for upsampling) duplicates every row and every column. The second class corresponds to bilinear interpolation, for both upsampling and downsampling. The third class comprises the biorthogonal and orthogonal wavelets. The fourth class is what we termed binomial filters. The fifth class consists of least-square FIR filters.
Citations: 8
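The pipeline the summary describes is easy to reproduce in one dimension. The sketch below is a hedged illustration, assuming a binomial pre/post-filter [1, 2, 1]/4 (one of the five filter classes studied); it pre-filters, decimates by two, zero-upsamples, post-filters, and reports the SNR against the original signal.

```python
import numpy as np

def lowpass(x, h=np.array([1.0, 2.0, 1.0]) / 4.0):
    return np.convolve(x, h, mode="same")

def downsample(x):
    return lowpass(x)[::2]          # pre-filter, then decimate by 2

def upsample(y):
    z = np.zeros(2 * len(y))
    z[::2] = y                      # zero-upsample: insert 0s between samples
    return 2.0 * lowpass(z)         # post-filter (gain 2 restores amplitude)

def snr_db(orig, rec):
    noise = orig - rec
    return 10.0 * np.log10(np.sum(orig**2) / np.sum(noise**2))

x = np.sin(2 * np.pi * 0.05 * np.arange(256))   # smooth test signal
print("SNR: %.1f dB" % snr_db(x, upsample(downsample(x))))
```

With this particular post-filter, the reconstruction reduces to linear interpolation at the odd samples, which is why the bilinear class in the abstract appears as one point in the same framework.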
A Bayesian framework for content-based indexing and retrieval
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672322
N. Vasconcelos, A. Lippman
Abstract: Summary form only given. One important requirement for practical retrieval systems is the ability to jointly address the issues of indexing and compression. By formulating query-by-example as a problem of Bayesian inference and establishing a link between probability density estimation and vector quantization, we previously introduced a representation that leads to very efficient procedures for indexing and retrieval directly in the compressed domain without compromising coding efficiency. In this paper, we build on the ability of the Bayesian formulation to support sophisticated inference and incorporate this representation into a very flexible indexing and retrieval framework that (1) leads to intuitive retrieval procedures, (2) can integrate different content modalities to eliminate some of the strongest limitations of the query-by-example paradigm, and (3) supports statistical learning of all the model parameters and can therefore be trained automatically.
Citations: 30
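As a hedged illustration of query-by-example as Bayesian inference, the sketch below assumes each database image is summarized by a histogram over vector-quantized feature indices and ranks images by the likelihood they assign to the query's features; the paper's actual density models and compressed-domain procedures are considerably richer.

```python
import numpy as np

def log_likelihood(query_indices, hist, eps=1e-6):
    """Sum of log P(feature | image model) over the query's features."""
    p = (hist + eps) / (hist + eps).sum()   # smoothed class-conditional model
    return float(np.sum(np.log(p[query_indices])))

def retrieve(query_indices, database):
    """Rank database entries by posterior (uniform prior => likelihood)."""
    scores = {name: log_likelihood(query_indices, h)
              for name, h in database.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy database: histograms over an 8-symbol feature codebook.
db = {"sunset": np.array([9, 1, 0, 0, 0, 0, 0, 0], float),
      "forest": np.array([0, 0, 8, 2, 0, 0, 0, 0], float)}
print(retrieve(np.array([0, 0, 1]), db))   # -> ['sunset', 'forest']
```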
Japanese text compression using word-based coding
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672306
T. Morihara, N. Satoh, H. Yahagi, S. Yoshida
Abstract: Summary form only given. Since Japanese characters are encoded in 16 bits, the large alphabet has made compression using 8-bit character-sampling coding methods difficult. At DCC'97, Satoh et al. (1997) reported that 16-bit character-sampling adaptive arithmetic coding is effective in improving the compression ratio. However, the adaptive compression method does not work well on the small documents produced in offices by groupware and e-mail. The present paper studies a word-based semi-adaptive compression method for Japanese text, aiming for good compression performance across a range of document sizes. The algorithm is composed of two stages. The first stage converts input strings into word-index numbers (intermediate data) corresponding to the longest matching strings in the dictionary; the second stage reduces the redundancy of the intermediate data. We adopted a 16-bit word index, and first-order-context 16-bit-sampling PPMC2 (16-bit PPM) for entropy coding in the second stage.
Citations: 2
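The first stage lends itself to a compact illustration. The sketch below performs greedy longest-match conversion of text into word-index numbers against a toy dictionary (the dictionary contents are illustrative assumptions); the second-stage 16-bit PPM entropy coder is not shown.

```python
def longest_match_indices(text, dictionary):
    """Map text to indices of the longest matching dictionary entries
    at each position; unmatched characters are emitted as literals."""
    words = sorted(dictionary, key=len, reverse=True)  # try longest first
    index = {w: i for i, w in enumerate(dictionary)}
    out, pos = [], 0
    while pos < len(text):
        for w in words:
            if text.startswith(w, pos):
                out.append(index[w])
                pos += len(w)
                break
        else:                      # no entry matched: emit a literal code
            out.append(-ord(text[pos]))
            pos += 1
    return out

dictionary = ["データ", "圧縮", "データ圧縮", "アルゴリズム"]
print(longest_match_indices("データ圧縮アルゴリズム", dictionary))
# -> [2, 3]: the longest entries win over their shorter prefixes
```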
Switching between two universal source coding algorithms
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672217
P. Volf, F. Willems
Abstract: This paper discusses a switching method which can be used to combine two sequential universal source coding algorithms. The switching method treats the two algorithms as black boxes and can only use their estimates of the probability distributions for the consecutive symbols of the source sequence. Three weighting algorithms based on this switching method are presented. Empirical results show that all three weighting algorithms perform better than the source coding algorithms they combine.
Citations: 70
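For intuition, the sketch below combines two black-box predictors with a plain Bayesian mixture, weighting each model by the probability it assigned to the sequence so far. The paper's switching method is more powerful, since it also models the best algorithm changing partway through the sequence; this sketch only captures the weighting flavor.

```python
def mixture_probability(p1_seq, p2_seq):
    """p1_seq, p2_seq: per-symbol probabilities the two black-box models
    assigned to the symbols that actually occurred. Returns the mixture's
    per-symbol probabilities."""
    w1 = w2 = 0.5                      # uniform prior over the two models
    mixed = []
    for p1, p2 in zip(p1_seq, p2_seq):
        mixed.append(w1 * p1 + w2 * p2)
        w1, w2 = w1 * p1, w2 * p2      # Bayes update from the observed symbol
        total = w1 + w2
        w1, w2 = w1 / total, w2 / total
    return mixed

# Model 1 predicts well early, model 2 late; the mixture tracks the better one.
p1 = [0.9, 0.9, 0.9, 0.2, 0.2]
p2 = [0.3, 0.3, 0.3, 0.8, 0.8]
print([round(p, 3) for p in mixture_probability(p1, p2)])
```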
Video coding using vector zerotrees and adaptive vector quantization
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672275
J. Fowler
Abstract: Summary form only given. We present a new algorithm for intraframe coding of video which combines zerotrees of vectors of wavelet coefficients with the generalized-threshold-replenishment (GTR) technique for adaptive vector quantization (AVQ). A data structure, the vector zerotree (VZT), is introduced to identify trees of insignificant vectors, i.e., those vectors of wavelet coefficients in a dyadic subband decomposition that are to be coded as zero. GTR coders are then applied to each subband to efficiently code the significant vectors by adapting to their changing statistics. Both VZT generation and GTR coding are based upon minimization of criteria involving both rate and distortion. In addition, perceptual performance is improved by invoking simple, perceptually motivated weighting in both the VZT and the GTR coders.
Citations: 0
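The core significance test behind a zerotree is simple to state: a node roots a zerotree if it and all of its descendants in the spatial-orientation tree are insignificant. The sketch below illustrates this for one tree, assuming one value per node (for the vector case of the paper, replace the absolute value with a vector norm); it is an illustration, not the paper's coder.

```python
import numpy as np

def is_zerotree(levels, k, i, j, threshold):
    """True if node (i, j) at level k and all of its descendants are
    insignificant. 'levels' is coarsest-to-finest; node (i, j) parents
    the 2x2 block at (2i, 2j) one level down."""
    if abs(levels[k][i, j]) >= threshold:
        return False
    if k + 1 == len(levels):
        return True
    return all(is_zerotree(levels, k + 1, 2*i + di, 2*j + dj, threshold)
               for di in (0, 1) for dj in (0, 1))

levels = [np.array([[0.1]]),
          np.array([[0.2, 0.0], [0.3, 0.1]]),
          np.zeros((4, 4))]
print(is_zerotree(levels, 0, 0, 0, threshold=0.5))   # -> True
```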
Packed-TS transform [for image compression]
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672327
Bo Zhang, Yuan F. Zheng
Abstract: Summary form only given. As a reversible integer wavelet transform, the TS transform has gained much attention in both lossless and lossy compression. It is a good approximation to the (2, 6) wavelet, one of the best biorthogonal wavelets for image compression, and it can be implemented using only integer addition/subtraction and shifts. Most gray-scale images are 8-bit, so their TS transform coefficients can be represented in 16-bit words, while in most modern computers 32-bit and 16-bit arithmetic run at the same speed. We propose a method to speed up the TS transform for image compression, called the packed-TS transform. The proposed method packs two adjacent pixels into one double-word; it can therefore use the 32-bit computational capability of modern computers to accomplish two additions/subtractions in one instruction cycle. The packed-TS transform is also a reversible transform. In the packed-TS transform, the two adjacent pixels or coefficients are stored in the high word and the low word of a double-word, respectively, and the decomposition/reconstruction is performed on this double-word. We compare the performance of the original TS transform and the proposed packed-TS transform on five images: Girl, Lena, Peppers, Couple, and Man. The experiments show that the packed-TS transform is faster than the original TS transform by about 30 percent, with comparable quality in the reconstructed images.
Citations: 0
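The packing trick itself can be shown in a few lines. The sketch below packs two 16-bit values into one 32-bit word so that a single integer addition performs two pixel additions at once, assuming operands stay small enough that neither half overflows into its neighbour; the actual transform builds its add/subtract/shift steps on this principle.

```python
MASK32 = 0xFFFFFFFF

def pack(hi, lo):
    """Store two 16-bit values in the high and low words of a double-word."""
    return ((hi << 16) | (lo & 0xFFFF)) & MASK32

def unpack(w):
    return (w >> 16) & 0xFFFF, w & 0xFFFF

a = pack(100, 7)       # two adjacent pixels from one row
b = pack(25, 3)        # two adjacent pixels from the next row
s = (a + b) & MASK32   # ONE addition computes both pixel sums
print(unpack(s))       # -> (125, 10)
```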
Compression of unicode files
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672274
P. Fenwick, S. Brierley
Abstract: Summary form only given. The increasing importance of unicode for text files, for example with Java and in some modern operating systems, implies a possible increase in data storage space and data transmission time, with a corresponding need for data compression. However, data compressors designed for traditional 8-bit byte data are not necessarily well matched to the peculiarities of unicode data. Different "standard" text compression methods behave in different ways compared with their known performance on ASCII and other 8-bit data. A small corpus of unicode files has been compressed with several widely available text compressors of the various types, confirming that unicode files have different compression characteristics from those known for 8-bit data. Tests with a simple LZ-77 compressor designed to operate in both 8-bit and 16-bit modes indicate that it may be useful to design compressors specifically for unicode data.
Citations: 11
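The mismatch is easy to observe with stock tools. The sketch below compresses the same mixed-script text encoded as UTF-8 and as UTF-16 using zlib as a stand-in byte-oriented LZ-77 coder; the sample string is an illustrative choice, not the paper's corpus.

```python
import zlib

# Mixed-script sample text, repeated to give the compressor some context.
text = "unicode 圧縮 сжатие συμπίεση " * 200

for encoding in ("utf-8", "utf-16-le"):
    raw = text.encode(encoding)
    packed = zlib.compress(raw, 9)
    print("%-9s raw=%6d compressed=%6d" % (encoding, len(raw), len(packed)))
```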
A perceptual preprocessor to segment video for motion estimation
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672255
Yi-jen Chiu
Abstract: Summary form only given. The objective of motion estimation and motion compensation is to reduce the temporal redundancy between adjacent pictures in a video sequence. Motion estimation is usually performed by calculating an error metric, such as the mean absolute error (MAE), for each block in the current frame over a displaced region in the previously reconstructed frame; the motion vector is the displacement with the minimum error metric. Although this achieves the minimum-MAE residual block, it does not necessarily yield the best perceptual quality, since the MAE is not always a good indicator of video quality. In low-bit-rate video coding, the overhead of sending the motion vectors becomes a significant proportion of the total data rate, and the minimum-MAE motion vector may not achieve the minimum joint entropy for coding the residual block and motion vector, and thus may not achieve the best compression efficiency. In this paper, we attack these problems by introducing a perceptual preprocessor which exploits the insensitivity of the human visual system (HVS) to mild changes in pixel intensity in order to segment the video into regions according to the perceptibility of the picture changes. The preprocessor can exploit the local psycho-perceptual properties of the HVS because it segments video in the spatio-temporal pixel domain, where the associated computational complexity is very small. With the segmentation information, we then determine which macroblocks require motion estimation.
Citations: 1
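A hedged sketch of the segmentation step: pixels whose inter-frame change falls below a visibility threshold are treated as imperceptible, and macroblocks containing too few perceptible changes skip motion estimation entirely. The threshold, block size, and cutoff below are illustrative assumptions, not the paper's tuned HVS model.

```python
import numpy as np

def blocks_needing_me(prev, curr, jnd=8, block=16, min_changed=32):
    """Return (row, col) macroblock coordinates that need motion estimation."""
    perceptible = np.abs(curr.astype(int) - prev.astype(int)) > jnd
    rows, cols = prev.shape
    needed = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            if perceptible[r:r+block, c:c+block].sum() >= min_changed:
                needed.append((r // block, c // block))
    return needed

prev = np.zeros((48, 48), dtype=np.uint8)
curr = prev.copy()
curr[20:36, 20:36] = 200        # a moving object; the rest is static
print(blocks_needing_me(prev, curr))   # only blocks touching the object
```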
A fast renormalisation for arithmetic coding
Pub Date: 1998-03-30 | DOI: 10.1109/DCC.1998.672314
M. Schindler
Abstract: Summary form only given. All integer-based arithmetic coding consists of two steps: proportional range restriction and range expansion (renormalisation). Here a method is presented that significantly reduces the complexity of renormalisation, allowing a speedup of arithmetic coding by a factor of up to 2. The main idea is to treat the output not as a binary number but as a base-256 (or other) number. This requires less renormalisation and no bitwise operations.
Citations: 37
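The renormalisation loop itself is the whole trick: emit whole top bytes of the coder state instead of single bits. The sketch below shows a byte-wise loop of the kind the abstract describes, under simplifying assumptions (32-bit state; carry propagation into already-emitted bytes is ignored, which a real coder must handle).

```python
TOP = 1 << 24          # renormalise when fewer than 24 bits of range remain

def renormalise(low, rng, out):
    """Expand 'rng' by whole bytes, emitting the settled top byte of 'low'
    each time; far fewer iterations than bit-by-bit renormalisation."""
    while rng < TOP:
        out.append((low >> 24) & 0xFF)   # emit the top byte of 'low'
        low = (low << 8) & 0xFFFFFFFF
        rng <<= 8
    return low, rng

out = []
low, rng = renormalise(0x12345678, 0x00001000, out)
print([hex(b) for b in out], hex(low), hex(rng))
# -> ['0x12', '0x34'] 0x56780000 0x10000000: two bytes out, range restored
```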