Proceedings DCC '95 Data Compression Conference: Latest Articles

Trade-off and applications of source-controlled channel decoding to still images
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515553
M. Ruf
Abstract: Summary form only given, as follows. For image transmission, using a new channel decoder, we present improvements in image quality leading to much more graceful degradation under degrading channel conditions. The APRI-SOVA, based on the Viterbi algorithm, exploits the residual redundancy and correlation in the source bit stream without changing the transmitter. For three different quantizers (applied after a discrete wavelet transform), we show and discuss the trade-off between increasing source coding performance in the case of no channel errors (uniform threshold (UT) - generalized Gaussian (GG) - pyramid vector quantizer (PVQ)) and the decreasing improvement using the APRI-SOVA in the case of equal error protection (EEP) for noisy channels (PVQ - GG - UT). We develop a means to judge the applicability of the APRI-SOVA by considering the remaining correlation of the coded bits (much for the simple UT, little for the complex PVQ), together with a semi-analytical way to calculate the expected improvement. Simulation results for EEP and additive white Gaussian noise show improvements for the LENNA image of up to 1.8 dB (UT) and 1.3 dB (GG) in PSNR and no gain for the PVQ, with UT outperforming the other quantizers and thus providing gains of up to 4 dB in PSNR and up to 0.75 dB in E_s/N_0 when choosing the right quantizer. Even greater gains of up to 2.2 dB (UT) in PSNR and 0.5 dB in E_s/N_0 can be obtained when applying combined source and channel coding together with unequal error protection (UEP).
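The principle behind source-controlled decoding, adding a priori information derived from residual source redundancy to the channel metric before deciding a bit, can be sketched as follows. This is only the core log-likelihood-ratio combination, not the APRI-SOVA itself (which folds the a priori term into the Viterbi path metrics); all values are illustrative.

```python
import math

def llr_from_prob(p1):
    # A priori log-likelihood ratio L = ln(P(b=0) / P(b=1)),
    # estimated from the residual redundancy of the source bit stream.
    return math.log((1.0 - p1) / p1)

def decide_bit(channel_llr, apriori_llr):
    # Source-controlled decoding in its simplest form: add the a priori
    # LLR to the channel LLR, then take the hard decision on the sum.
    total = channel_llr + apriori_llr
    return 0 if total >= 0 else 1

# A weak channel observation slightly favouring bit 1 ...
ch = -0.4
# ... is overridden by strong source-side knowledge that 0 is likely.
ap = llr_from_prob(0.1)        # P(b=1) = 0.1, so the a priori LLR is positive
bit = decide_bit(ch, ap)       # the a priori term flips the weak channel vote
```

A strongly negative channel LLR would still win against the same a priori term, which is why the gain shrinks as the coded bits become less correlated (as with the PVQ).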
Citations: 4
Sliding-window compression for PC software distribution
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515578
T. Yu
Abstract: Summary form only given, as follows. We study the use of the LZ77 sliding-window algorithm to compress PC files for distribution. Since the files need to be compressed only once but expanded many times, one can afford a complex compression scheme but must keep the expansion phase simple and fast. In the experiment we allow the copy-length to be as large as 210 K, which is the window buffer size used; this allows the expansion program to run even on the old PC/XT and compatibles. A suffix tree is employed to search for the longest match, so that the search time is independent of the window size. We employ two methods to encode the displacements and copy-lengths: the first uses a modified unary code (LZU), while the second uses Huffman codes (LZH). Results and comparisons with UNIX's COMPRESS and the PC archive program LHA are tabulated.
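The sliding-window parse and its fast expansion can be sketched as follows. This toy version uses a brute-force match search (the paper's suffix tree makes the search time independent of the window size) and illustrative window and copy-length limits; the expander is the simple, fast part that would run on the target PC.

```python
def lz77_parse(data, window=4096, max_len=255):
    """Greedy LZ77 parse: emit (offset, length, next_char) triples.
    Matches may overlap the current position, as in standard LZ77."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            # Stop one short of the end so a literal next_char always exists.
            while (l < max_len and i + l < len(data) - 1
                   and data[j + l] == data[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_expand(tokens):
    """The cheap decoder: copy `length` chars from `offset` back, char by
    char (which handles overlapping copies), then append the literal."""
    buf = []
    for off, length, nxt in tokens:
        for _ in range(length):
            buf.append(buf[-off])
        buf.append(nxt)
    return ''.join(buf)

s = "abracadabra"
tokens = lz77_parse(s)
restored = lz77_expand(tokens)
```

In LZU the (offset, length) pairs would then be written with a modified unary code, and in LZH with Huffman codes; both back ends share this parse.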
Citations: 0
Extending Huffman coding for multilingual text compression
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515547
Chi-Hung Chi, Chi-Kwun Kan, Kwok-Shing Cheng, L. Wong
Abstract: Summary form only given. We propose two new algorithms, based on 16-bit or 32-bit character sampling and on the unique features of languages with a large number of distinct characters, to improve data compression ratios for multilingual text documents. We choose Chinese, with 16-bit character sampling, as the representative language in our study. The first approach, called static Chinese Huffman coding, introduces the concept of a single Chinese character into the Huffman tree; experimental results showed an improvement in the compression ratio. The second approach, called dictionary-based Chinese Huffman coding, incorporates the concept of Chinese words into the Huffman coding.
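The core idea, treating each multi-byte character as a single Huffman symbol instead of coding raw bytes, can be sketched with a generic Huffman builder (not the paper's exact algorithm; the sample text is illustrative):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code over arbitrary symbols. Feeding it whole
    CJK characters (16-bit units) rather than bytes is the essence of
    the 'static Chinese Huffman coding' approach."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, unique tiebreak, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, n, merged))
        n += 1
    return heap[0][2]

text = "你好你好你再见"
codes = huffman_codes(text)
# The most frequent character ("你", 3 occurrences) gets the shortest code.
```

The dictionary-based variant would instead segment the text into multi-character words before counting frequencies, at the cost of maintaining a word dictionary.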
Citations: 2
A high performance block compression algorithm for small systems-software and hardware implementations
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515532
A. de la Cruz Nogueiras, M. Gamez Lau, A. Cerdeira Altuzarra, M. Estrada del Cueto, P. Goga
Abstract: Summary form only given. A new algorithmic approach to block data compression is described, using a highly contextual codification of the dictionary that gives substantial compression-rate advantages over existing technologies. The algorithm takes into account the limitations and characteristics of small systems, such as low memory consumption, high speed, and short latency, as required by communication applications. It uses a novel construction of the prefix-free dictionary, a simple but powerful heuristic for filtering out the non-compressed symbols, and a predictive dynamic prefix coding for the output entities. It also employs universal codification of the integers, allowing a very fast and direct implementation in silicon. A dynamic compression software package is detailed, and several techniques developed to maximize the usable disk space and the software speed, among others, are discussed.
Citations: 0
A derailment-free finite-state vector quantizer with optimized state codebooks
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515505
X. Ginesta, Seung P. Kim
Abstract: A new approach to the design of a finite-state vector quantizer (FSVQ) is proposed. An FSVQ essentially exploits correlations between adjacent blocks for efficient coding. Previous FSVQ design schemes had ad-hoc features, defining states and allocating resources with an equal number of bits for all state codebooks regardless of their probabilities of occurrence in a given source. We propose an FSVQ design approach that improves compression performance by merging states and using variable state-codebook sizes. Another undesirable feature of the FSVQ is a derailment problem that degrades performance in many practical applications. We propose a structurally constrained state-codebook design approach that eliminates the derailment problem. The proposed algorithm outperforms previously known FSVQ methods. A further development of the algorithm utilizing mean-removed VQ is described, which produces fewer block artifacts even though the PSNR is slightly lower.
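The FSVQ mechanism itself can be sketched as follows: the current state selects which small state codebook to search, and the chosen index drives the next state. The decoder tracks the identical state sequence, so no state information is transmitted. The codebooks and transition table below are illustrative toys, not the paper's optimized, derailment-free design.

```python
def fsvq_encode(blocks, state_codebooks, next_state):
    """Encode each block with the codebook of the current state; the
    chosen index determines the next state via next_state[state][index]."""
    state, indices = 0, []
    for b in blocks:
        cb = state_codebooks[state]
        idx = min(range(len(cb)),
                  key=lambda k: sum((x - y) ** 2 for x, y in zip(cb[k], b)))
        indices.append(idx)
        state = next_state[state][idx]
    return indices

def fsvq_decode(indices, state_codebooks, next_state):
    # The decoder replays the same state sequence from the indices alone.
    state, out = 0, []
    for idx in indices:
        out.append(state_codebooks[state][idx])
        state = next_state[state][idx]
    return out

codebooks = [
    [(0, 0), (10, 10)],   # state-0 codebook
    [(5, 5), (10, 10)],   # state-1 codebook
]
nxt = [[0, 1], [1, 0]]    # next_state[state][chosen index]
idx = fsvq_encode([(9, 9), (10, 10)], codebooks, nxt)
rec = fsvq_decode(idx, codebooks, nxt)
```

Derailment occurs when channel errors desynchronize the decoder's state track from the encoder's; the paper's structural constraint on the state codebooks prevents this.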
Citations: 0
Application of single-pass adaptive VQ to bilevel images
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515533
C. Constantinescu, J. Storer
Abstract: Summary form only given; substantially as follows. Constantinescu and Storer (1994) introduced a new single-pass adaptive vector quantization algorithm that maintains a constantly changing dictionary of variable-sized rectangles by "learning" larger rectangles from smaller ones as an image is processed. For lossy compression of grayscale images, this algorithm, with no advance information or training, typically at least equals and often exceeds the compression obtained by the JPEG standard at a given quality. All of the authors' past work with this approach has been with lossy compression of images whose pixels are 8 or more bits. The present authors provide experimental evidence that their generic single-pass adaptive VQ algorithm is highly effective for bilevel images. They examine not only lossless compression but also very high quality lossy compression, as well as mixtures of lossless and lossy compression applied to scanned images that contain text, grayscale images, and line drawings. New distortion measures are introduced for high-quality lossy compressed bilevel images. The authors have also experimented with an image that is a mixture of text and grayscale imagery.
Citations: 3
Finite state methods for compression and manipulation of images
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515504
K. Culík, J. Kari
Abstract: Weighted finite automata (WFA) are a tool for specifying real functions and, in particular, grayscale images. Image compression software based on this algorithm is competitive with other methods in compressing typical grayscale images. It performs particularly well at high compression rates and for color images, and it has several additional advantages over other methods. This paper mainly deals with image manipulation: weighted finite transducers (WFT) can be used to specify a very wide variety of image transformations (linear operators on grayness functions). The authors briefly introduce WFA and WFT and give some examples of image transformations specified by WFT.
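To illustrate how a WFA assigns values to addresses, here is the classic two-state automaton for the 1-D linear ramp f(x) = x, a standard Culik-Kari teaching example (binary rather than quadrant addressing, for brevity). The value for an address word w is I . A[w1] ... A[wk] . F, the average of f over the addressed dyadic subinterval.

```python
# State 0 represents the function x, state 1 the constant function 1.
# Restricting x to the left half and rescaling gives x/2; to the right
# half gives (x + 1)/2 = 0.5*x + 0.5*1 -- hence the transition weights.
A = {
    0: [[0.5, 0.0], [0.0, 1.0]],    # letter 0: refine into left half
    1: [[0.5, 0.5], [0.0, 1.0]],    # letter 1: refine into right half
}
I = [1.0, 0.0]    # initial distribution: start in the "ramp" state
F = [0.5, 1.0]    # final weights: averages of x and 1 over [0, 1]

def wfa_value(word):
    """Value of the dyadic subinterval addressed by `word` (a list of
    0/1 digits): row vector I times the product of transition matrices,
    dotted with F."""
    v = I[:]
    for d in word:
        m = A[d]
        v = [v[0] * m[0][0] + v[1] * m[1][0],
             v[0] * m[0][1] + v[1] * m[1][1]]
    return v[0] * F[0] + v[1] * F[1]

# wfa_value([])     -> 0.5    (average of x over [0, 1])
# wfa_value([1, 1]) -> 0.875  (average of x over [0.75, 1])
# wfa_value([0, 0]) -> 0.125  (average of x over [0, 0.25])
```

For images, the same machinery runs over a four-letter quadrant alphabet, and a WFT composes such automata to realize linear image transformations.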
Citations: 11
Tree-structured vector quantization with significance map for wavelet image coding
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515493
P. Cosman, S. M. Perlmutter, K. Perlmutter
Abstract: Variable-rate tree-structured VQ is applied to the coefficients obtained from an orthogonal wavelet decomposition. After encoding a vector, we examine the spatially corresponding vectors in the higher subbands to see whether or not they are "significant", that is, above some threshold. One bit of side information is sent to the decoder to inform it of the result. When the higher bands are encoded, those vectors which were earlier marked as insignificant are not coded. An improved version of the algorithm makes the decision not to code vectors from the higher bands based on a distortion/rate trade-off rather than a strict thresholding criterion. Results of this method on the test image "Lena" yielded a PSNR of 30.15 dB at 0.174 bits per pixel.
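The side information can be sketched as one bit per spatial location in a higher subband: 1 if the corresponding coefficient vector is significant (above the threshold) and will be coded, 0 if it will be skipped entirely. The energy measure and threshold below are illustrative assumptions, not the paper's exact criterion.

```python
def significance_map(subband_vectors, threshold):
    """One significance bit per vector: 1 if the vector's energy exceeds
    the threshold (code it), 0 otherwise (skip it).  A toy version of
    the side information in the thresholding variant of the scheme."""
    bits = []
    for vec in subband_vectors:
        energy = sum(c * c for c in vec)
        bits.append(1 if energy > threshold else 0)
    return bits

# Two strong vectors and one near-zero vector in a higher subband:
bits = significance_map([(3, 4), (0.1, 0.1), (1, 1)], 1.0)
```

The improved version replaces the fixed threshold with a per-vector distortion/rate comparison: skip the vector when the rate saved outweighs the distortion incurred.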
Citations: 23
A speech coding algorithm based on predictive coding
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515565
S. Kwong, K.F. Man
Abstract: Summary form only given. A compression algorithm for high-quality speech signals using predictive coding techniques is developed. Code-excited linear predictive coding (CELPC) is one of the key techniques for compressing speech to a bit rate around 4.8 kbps, but it carries a heavy computational requirement. Speech signals can usually be divided into two portions in the frequency domain: the base-band and the high-band. A hybrid scheme combining CELPC and voice-excited linear predictive coding (VELPC) is therefore developed to reduce the complexity of the original CELPC: a speech signal is first split into base-band and high-band portions, and the base-band portion is coded with CELPC while the high-band portion is coded with VELPC. Test experiments showed that the new coder can produce synthesized speech of good quality at a lower bit rate than the original CELPC. In choosing the bandwidth of the base-band signal there is a trade-off between coding quality and bit rate; in our experiment it is chosen as one fourth of the bandwidth of the original speech. Subjective evaluation experiments were conducted to test the performance of the hybrid CELPC/VELPC technique. For speech sampled at 8 kHz, a bit rate of 4.0 kbps can be achieved with frame intervals of 23 ms. The experimental results showed that the quality of the synthesized speech using the hybrid coding technique at 4.0 kbps was almost the same as that of the CELPC at 4.8 kbps.
Citations: 6
Generalized Lempel-Ziv parsing scheme and its preliminary analysis of the average profile
Proceedings DCC '95 Data Compression Conference | Pub Date: 1995-03-28 | DOI: 10.1109/DCC.1995.515516
G. Louchard, W. Szpankowski
Abstract: The goal of this contribution is twofold: (i) to introduce a generalized Lempel-Ziv parsing scheme, and (ii) to analyze second-order properties of some compression schemes based on this parsing scheme. We consider a generalized Lempel-Ziv parsing scheme that partitions a sequence of length n into variable phrases (blocks) such that a new block is the longest substring seen in the past by at most b-1 phrases. The case b=1 corresponds to the original Lempel-Ziv scheme. In this paper, we investigate the size of a randomly selected phrase and the average number of phrases of a given size by analyzing the so-called b-digital search tree (b-DST) representation. For a memoryless source, we prove that the size of a typical phrase is asymptotically normally distributed. This result is new even for b=1, and b>1 is a non-trivial extension.
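The parsing rule can be sketched as follows, under a phrase-counting reading of the abstract: each new phrase is the shortest prefix of the remainder that has occurred as a phrase fewer than b times (equivalently, the longest previously seen substring plus one symbol). For b = 1 this reduces to classic Lempel-Ziv incremental (LZ78-style) parsing.

```python
from collections import defaultdict

def generalized_lz_parse(s, b=1):
    """Partition s into phrases: a new phrase is the shortest prefix of
    the remaining suffix that has appeared fewer than b times as a
    phrase so far.  The final phrase may be a repeat if the input ends
    mid-extension, as in the classic scheme."""
    seen = defaultdict(int)     # phrase -> number of times used
    phrases, i = [], 0
    while i < len(s):
        j = i + 1
        # Extend while this prefix has already been used b times.
        while j < len(s) and seen[s[i:j]] >= b:
            j += 1
        phrase = s[i:j]
        seen[phrase] += 1
        phrases.append(phrase)
        i = j
    return phrases

# b = 1: each phrase is new -- the original Lempel-Ziv parsing.
p1 = generalized_lz_parse("aaabbabaabaa")
# b = 2: every phrase may be reused once before it must grow.
p2 = generalized_lz_parse("aaaa", b=2)
```

The paper's analysis views these phrases as nodes of a b-digital search tree, where each node can absorb up to b phrases before forcing a longer one.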
Citations: 7