Proceedings DCC '95 Data Compression Conference: Latest Publications

A multi-dimensional measure for image quality
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515579
A. Eskicioglu
Abstract: Summary form only given. It is necessary to develop a quality measure capable of determining (1) the amount of degradation, (2) the type of degradation, and (3) the impact of compression on different frequency ranges in a reconstructed image. We discuss the development of a new graphical measure based on these three criteria. To enable a local error analysis, we first divide a given image (the original or a degraded version) into areas of certain activity levels using a quadtree decomposition, as in the case of Hosaka plots. The largest and smallest block sizes in our decomposition scheme are 16 and 2, respectively. This gives us four classes of blocks, each class containing blocks of the same size: class i represents the collection of i×i blocks, and a higher value of i denotes a lower-frequency area of the image. After obtaining the quadtree decomposition for a specified variance threshold, we compute three values for each class i (i = 2, 4, 8, 16) and normalize them as follows: (1) the number of pixels / the number of pixels in the entire image; (2) the number of distinct pixel values / the number of possible pixel values; and (3) the average of the standard deviations in the blocks / a preset maximum standard deviation. The essential characteristics of the image are then displayed in a normalized bar chart. This lays the foundation for designing optimized image coders.
Citations: 6
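The decomposition and the three per-class values described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the default variance threshold, and the preset maximum standard deviation (64, for 8-bit data) are assumptions.

```python
import numpy as np

def quadtree_classes(img, var_threshold=100.0, min_size=2, max_size=16):
    """Recursively split blocks whose variance exceeds the threshold,
    collecting the final blocks by size (class i holds the i x i blocks)."""
    classes = {2: [], 4: [], 8: [], 16: []}

    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        if size == min_size or block.var() <= var_threshold:
            classes[size].append(block)
            return
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                split(y + dy, x + dx, h)

    for y in range(0, img.shape[0], max_size):
        for x in range(0, img.shape[1], max_size):
            split(y, x, max_size)
    return classes

def class_features(classes, img, max_std=64.0, levels=256):
    """The three normalized values per class from the abstract."""
    feats = {}
    total = img.size
    for i, blocks in classes.items():
        if not blocks:
            feats[i] = (0.0, 0.0, 0.0)
            continue
        pixels = np.concatenate([b.ravel() for b in blocks])
        f1 = pixels.size / total                           # pixel share
        f2 = np.unique(pixels).size / levels               # distinct values
        f3 = np.mean([b.std() for b in blocks]) / max_std  # mean block std
        feats[i] = (f1, f2, f3)
    return feats
```

The f1 values sum to 1 over the four classes, so the resulting bar chart is directly comparable between the original and a degraded image.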
Quantization of overcomplete expansions
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515491
Vivek K Goyal, M. Vetterli, N. T. Thao
Abstract: We present a method that represents a signal with respect to an overcomplete set of vectors, which we call a dictionary. The use of overcomplete sets of vectors (redundant bases or frames) together with quantization is explored as an alternative to transform coding for signal compression. The goal is to retain the computational simplicity of transform coding while adding flexibility such as adaptation to signal statistics. We show results using both fixed quantization in frames and greedy quantization using matching pursuit. An MSE slope of -6 dB per octave of frame redundancy is shown for a particular tight frame and is verified experimentally for another frame.
Citations: 30
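Fixed quantization in a frame can be sketched as below: expand the signal onto a tight frame, scalar-quantize the coefficients, and reconstruct with the pseudo-inverse. The specific frame (unit vectors evenly spaced over a half circle in R^2) and the step size are illustrative assumptions, not the frames used in the paper.

```python
import numpy as np

def tight_frame(m):
    """m unit vectors evenly spaced over a half circle:
    a tight frame for R^2 when m >= 2, with F.T @ F = (m/2) * I."""
    angles = np.arange(m) * np.pi / m
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (m, 2)

def quantize_expansion(x, F, step):
    """Expand x onto the frame rows of F, scalar-quantize the
    coefficients, and reconstruct with the pseudo-inverse."""
    coeffs = F @ x                       # overcomplete expansion
    q = step * np.round(coeffs / step)   # uniform scalar quantization
    return np.linalg.pinv(F) @ q         # linear reconstruction
```

Increasing m increases the redundancy of the expansion; the paper's result concerns how fast the reconstruction MSE falls as that redundancy grows.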
Context models in the MDL framework
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515496
E. Ristad, Robert G. Thomas
Abstract: Current approaches to speech and handwriting recognition demand a strong language model with a small number of states and an even smaller number of parameters. We introduce four new techniques for statistical language models: multicontextual modeling, nonmonotonic contexts, implicit context growth, and the divergence heuristic. Together these techniques result in language models that have few states, even fewer parameters, and low message entropies. For example, our techniques achieve a message entropy of 2.16 bits/char on the Brown corpus using only 19374 contexts and 54621 parameters. Multicontextual modeling and nonmonotonic contexts are generalizations of the traditional context model. Implicit context growth ensures that the state transition probabilities of a variable-length Markov process are estimated accurately. This technique is generally applicable to any variable-length Markov process whose state transition probabilities are estimated from string frequencies. In our case, each state in the Markov process represents a context, and implicit context growth conditions the shorter contexts on the fact that the longer contexts did not occur. In a traditional unicontext model, this technique reduces the message entropy of typical English text by 0.1 bits/char. The divergence heuristic is a heuristic estimation algorithm based on Rissanen's (1978, 1983) minimum description length (MDL) principle and universal data compression algorithm.
Citations: 4
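The message-entropy figures quoted above are obtained by coding each character against the counts accumulated so far in its context. A minimal adaptive context model illustrating that measurement (a simplified stand-in using plain add-one smoothing, not the paper's four techniques):

```python
from collections import defaultdict
import math

def message_entropy(text, order=1):
    """Adaptive order-`order` context model with add-one smoothing:
    sums the code length of each symbol given the counts seen so far,
    and returns the average in bits per character."""
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = sorted(set(text))
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]
        c = counts[ctx]
        total = sum(c.values()) + len(alphabet)   # add-one smoothing
        bits += -math.log2((c[ch] + 1) / total)   # adaptive code length
        c[ch] += 1
    return bits / len(text)
```

On highly predictable text the per-character cost falls quickly as the context counts grow, which is exactly what a strong language model buys.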
Bitgroup modeling of signal data for image compression
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515576
J. Vaisey, Mark Trumbo
Abstract: Summary form only given. Binary variable-order adaptive algorithms such as the UMC of Rissanen (1986) and JBIG can be used to losslessly compress non-binary data by splitting the data into planes, each of 1-bit resolution, and passing each plane to a separate instance of the algorithm. The UMC algorithm operated in this way is the most powerful lossless signal-data compressor the authors are aware of. We attempt to develop an understanding of why this approach is so effective. We investigate the common technique of Gray coding the data before splitting it into single-bit planes and passing them to the modeler and coder, and compare it to a simple weighted binary coding. We then propose a non-binary pseudo-Gray code as a method of generating planes of resolution greater than or equal to 1 bit, and compare it with the other conventional methods. The algorithm to generate the pseudo-Gray code is much the same as that for the construction of a binary Gray code, except that instead of minimizing the Hamming distance between neighboring bit planes, we minimize the Euclidean distance between adjacent groups of bit planes.
Citations: 0
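The baseline technique the paper examines, Gray coding followed by bit-plane splitting, can be sketched as follows (8-bit data and the helper names are assumptions):

```python
import numpy as np

def gray_code(values):
    """Standard binary-reflected Gray code: neighboring integers
    differ in exactly one bit after the transform, which keeps
    bit planes smoother than plain weighted binary coding."""
    v = np.asarray(values, dtype=np.uint8)
    return v ^ (v >> 1)

def bit_planes(img, bits=8):
    """Split an 8-bit array into `bits` binary planes, MSB first;
    each plane would go to its own instance of the binary coder."""
    return [(img >> b) & 1 for b in range(bits - 1, -1, -1)]
```

The Gray transform is lossless and invertible, so the original values are recoverable by reassembling the planes and undoing the code.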
A tree based binary encoding of text using LZW algorithm
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515573
T. Acharya, A. Mukherjee
Abstract: Summary form only given. The most popular adaptive dictionary coding scheme used for text compression is the LZW algorithm. In the LZW algorithm, a changing dictionary contains common strings that have been encountered so far in the text. The dictionary can be represented by a dynamic trie. The input text is examined character by character, and the longest substring of the text that already exists in the trie (called a prefix string) is replaced by a pointer to the node in the trie that represents it. The motivation of our research is to explore a variation of the LZW algorithm for variable-length binary encoding of text (which we call the LZWA algorithm) and to develop a memory-based VLSI architecture for text compression. We propose a new methodology that represents the trie as a binary tree (which we call a binary trie) to maintain the dictionary used in the LZW scheme. This binary tree preserves all the properties of the trie and can easily be mapped into memory. As a result, the common substrings can be encoded using variable-length prefix binary codes, which enable us to uniquely decode the text in its original form. The algorithm outperforms the usual LZW scheme when the text is small (usually less than 5 K). Depending on the characteristics of the text, compression-ratio improvements of around 10-30% over the LZW scheme have been achieved, but performance degrades for larger texts.
Citations: 8
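The baseline LZW scheme described above (growing dictionary, longest-prefix match, pointer output) can be sketched as follows. This is textbook LZW with a hash-table dictionary, not the paper's binary-trie LZWA variant:

```python
def lzw_encode(text):
    """Plain LZW: extend the current prefix while it is in the
    dictionary; on a miss, emit the prefix's code and add the
    extended string as a new dictionary entry."""
    dictionary = {chr(i): i for i in range(256)}
    prefix, codes = "", []
    for ch in text:
        if prefix + ch in dictionary:
            prefix += ch
        else:
            codes.append(dictionary[prefix])
            dictionary[prefix + ch] = len(dictionary)
            prefix = ch
    if prefix:
        codes.append(dictionary[prefix])
    return codes

def lzw_decode(codes):
    """Rebuild the dictionary on the fly; the get() default handles
    the classic KwKwK case where a code is used before it is stored."""
    dictionary = {i: chr(i) for i in range(256)}
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = dictionary.get(code, prev + prev[0])
        out.append(entry)
        dictionary[len(dictionary)] = prev + entry[0]
        prev = entry
    return "".join(out)
```

The paper's contribution lies in how the trie behind this dictionary is laid out (as a binary tree mappable to memory) and in emitting variable-length prefix codes instead of fixed-width pointers.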
Operations on compressed image data
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515542
J. Kanai, S. Latifi, G. Rajarathinam, G. Nagy, H. Bunke
Abstract: A formal framework for directly processing encoded data is presented. Image operations that can be applied directly and efficiently to run-length encoded data are identified. FSM and attributed-FSM models are used to describe these operations.
Citations: 1
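As an example of operating directly on run-length encoded data, the sketch below ORs two encoded binary rows by walking their run lists in lockstep, never expanding back to pixels. The encoding format ((value, length) pairs) and function names are assumptions for illustration, not the paper's FSM formulation:

```python
def rle(row):
    """Run-length encode a binary row as (value, length) pairs."""
    runs, prev, n = [], row[0], 1
    for v in row[1:]:
        if v == prev:
            n += 1
        else:
            runs.append((prev, n))
            prev, n = v, 1
    runs.append((prev, n))
    return runs

def rle_or(a, b):
    """OR two run-length encoded rows of equal pixel length without
    decoding: consume the shorter of the two current runs each step,
    merging equal-valued output runs as they are emitted."""
    out, (ai, al), (bi, bl) = [], a[0], b[0]
    ia = ib = 1
    while True:
        step = min(al, bl)
        v = ai | bi
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + step)
        else:
            out.append((v, step))
        al -= step
        bl -= step
        if al == 0:
            if ia == len(a):
                break
            ai, al = a[ia]
            ia += 1
        if bl == 0:
            bi, bl = b[ib]
            ib += 1
    return out
```

The work done is proportional to the number of runs rather than the number of pixels, which is the point of processing in the compressed domain.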
Vector quantization for lossless textual data compression
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515584
W. K. Ng, C. Ravishankar
Abstract: Summary form only given. Vector quantization (VQ) may be adapted for lossless data compression if the data exhibit vector structures, as in textual relational databases. Lossless VQ is discussed, and it is demonstrated that a relation of tuples may be encoded and allocated to physical disk blocks such that standard database operations such as access, insertion, deletion, and update are fully supported.
Citations: 1
Optimal linear prediction for the lossless compression of volume data
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515568
J. Fowler, R. Yagel
Abstract: Summary form only given. Data in volume form consumes an extraordinary amount of storage space. For efficient storage and transmission of such data, compression algorithms are imperative. However, most volumetric data sets are used in biomedicine and other scientific applications where lossy compression is unacceptable. We present a lossless data compression algorithm that uses optimal linear prediction to exploit correlations in all three dimensions. Our algorithm is a combination of differential pulse-code modulation (DPCM) and Huffman coding and results in compression of around 50% for a set of volume data files. The compression algorithm was run with each of the different predictors on a set of volumes consisting of MRI images, CT images, and electron-density map data.
Citations: 4
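The optimal-linear-prediction step can be sketched as a least-squares fit of each voxel against causal neighbors in all three dimensions, with the residuals then handed to an entropy coder. The choice of three neighbors and the rounding of predictions are illustrative assumptions; the paper compares several predictors:

```python
import numpy as np

def fit_predictor(vol):
    """Least-squares coefficients predicting each voxel from its three
    causal neighbors (x-1, y-1, z-1): 'optimal' in the MSE sense."""
    t = vol[1:, 1:, 1:].ravel().astype(float)
    X = np.stack([vol[1:, 1:, :-1].ravel(),    # x-1 neighbor
                  vol[1:, :-1, 1:].ravel(),    # y-1 neighbor
                  vol[:-1, 1:, 1:].ravel()],   # z-1 neighbor
                 axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(X, t, rcond=None)
    return coeffs

def residuals(vol, coeffs):
    """DPCM residuals: actual voxel minus rounded prediction.
    These integers are what a Huffman coder would then compress."""
    pred = (coeffs[0] * vol[1:, 1:, :-1] +
            coeffs[1] * vol[1:, :-1, 1:] +
            coeffs[2] * vol[:-1, 1:, 1:])
    return vol[1:, 1:, 1:] - np.round(pred)
```

On smooth volumes the residual distribution is far more peaked than the raw voxel distribution, which is where the roughly 50% compression comes from.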
Experiments on the zero frequency problem
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515590
J. Cleary, W. Teahan
Abstract: Summary form only given. A fundamental problem in the construction of statistical techniques for data compression of sequential text is the generation of probabilities from counts of previous occurrences. Each context used in the statistical model accumulates counts of the number of times each symbol has occurred in that context. In a binary alphabet there will be two counts, C0 and C1 (the number of times a 0 or a 1 has occurred). The problem is then to take the counts and generate from them the probability that the next character will be a 0 or a 1. A naive estimate of the probability of character i could be obtained by the ratio p_i = C_i/(C0 + C1). The fundamental problem with this is that it generates a zero probability whenever C0 or C1 is zero. Unfortunately, a zero probability prevents coding from working correctly, as the "optimum" code length in that case is infinite. Consequently, any estimate of the probabilities must be non-zero even in the presence of zero counts. This is called the zero frequency problem. A well-known solution was formulated by Laplace and is known as Laplace's law of succession. We have investigated the correctness of Laplace's law by experiment.
Citations: 35
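Laplace's law of succession replaces the naive ratio with p(s) = (c_s + 1)/(n + |A|), which is never zero even for unseen symbols. A minimal sketch (the function name is an assumption):

```python
def laplace_probs(counts, alphabet):
    """Laplace's law of succession: p(s) = (c_s + 1) / (n + |A|).
    Every symbol gets non-zero probability, so -log2 p(s) code
    lengths stay finite even when a count is zero."""
    n = sum(counts.get(s, 0) for s in alphabet)
    k = len(alphabet)
    return {s: (counts.get(s, 0) + 1) / (n + k) for s in alphabet}
```

With counts C0 = 3, C1 = 0 the naive estimate gives p_1 = 0 and an infinite code length, while Laplace's law gives p_1 = 1/5.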
Adaptive image quantization based on learning classifier systems
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515587
Jianhua Lin
Abstract: Summary form only given. The performance of a quantizer depends primarily on the selection of a codebook. Most quantization techniques used in the past are based on a static codebook that stays unchanged for the entire input. As already demonstrated successfully in lossless data compression, adaptation can be very beneficial when compressing typically changing input data. Adaptive quantization has been difficult to accomplish because of its lossy nature. We present a model for distribution-free adaptive image quantization based on learning classifier systems, which have been used successfully in machine learning. A basic learning classifier system is a special type of message-processing, rule-based system that produces output according to its input environment. Probabilistic learning mechanisms are used to dynamically direct the behavior of the system to adapt to its environment. The adaptiveness of a learning classifier system seems very appropriate for the quantization problem. A learning classifier system based adaptive quantizer consists of the input data, a codebook, and the output. When an input cannot be matched, a new codebook entry is constructed to match the input. Such an algorithm allows us not only to deal with a changing environment but also to control the quality of the quantized output. The adaptive quantizers presented can be applied to both scalar and vector quantization. Experimental results for each case in image quantization are very promising.
Citations: 2
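The grow-on-mismatch behavior described above (a new codebook entry whenever no existing entry matches the input closely enough) can be sketched without the learning-classifier machinery. The nearest-neighbor matching rule and the distance threshold are illustrative assumptions:

```python
import numpy as np

def adaptive_vq(vectors, max_dist):
    """Adaptive quantizer sketch: map each input to the nearest
    codebook entry; if none lies within `max_dist`, the input itself
    becomes a new entry, so output distortion is bounded by
    construction (the quality-control property from the abstract)."""
    codebook, indices = [], []
    for v in vectors:
        if codebook:
            d = [np.linalg.norm(v - c) for c in codebook]
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                indices.append(j)
                continue
        codebook.append(np.asarray(v, dtype=float))
        indices.append(len(codebook) - 1)
    return codebook, indices
```

With scalar inputs this reduces to adaptive scalar quantization; with blocks of pixels it is adaptive vector quantization, matching the two cases the abstract mentions.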