{"title":"Iterated denoising for image recovery","authors":"O. Guleryuz","doi":"10.1109/DCC.2002.999938","DOIUrl":"https://doi.org/10.1109/DCC.2002.999938","url":null,"abstract":"We propose an algorithm for image recovery where completely lost blocks in an image/video-frame are recovered using spatial information surrounding these blocks. Our primary application is on lost regions of pixels containing textures, edges and other image features that are not readily handled by prevalent recovery and error concealment algorithms. The proposed algorithm is based on the iterative application of a generic denoising algorithm and it does not necessitate any complex preconditioning, segmentation, or edge detection steps. Utilizing locally sparse linear transforms and overcomplete denoising, we obtain good PSNR performance in the recovery of such regions. In addition to results on image recovery, the paper provides further insights into the usefulness of popular transforms like wavelets, wavelet packets, discrete cosine transform (DCT) and complex wavelets in providing sparse image representations.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"19 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134222494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PPM: one step to practicality","authors":"D. A. Shkarin","doi":"10.1109/DCC.2002.999958","DOIUrl":"https://doi.org/10.1109/DCC.2002.999958","url":null,"abstract":"PPM is one of the most promising lossless data compression algorithms, using a Markov source model of order D. Its essence is the coding of a new (in the given context) symbol in one of the inner nodes of the context tree; a sequence of special escape symbols is used to describe this node. In practice, the majority of symbols are encoded in inner nodes, and the Markov model becomes rather conventional. Although the PPM algorithm achieves the best compression results in comparison with others, it is rarely used in practical applications due to its high computational complexity. This paper is devoted to a PPM implementation whose complexity is comparable with widespread practical compression schemes based on the LZ77, LZ78 and BWT algorithms. This scheme was proposed by Shkarin (see Problems of Information Transmission, vol.34, no.3, p.44-54, 2001) and is named PPM with information inheritance (PPMII).","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132616853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational complexity management of motion estimation in video encoders","authors":"Yafan Zhao, I. Richardson","doi":"10.1109/DCC.2002.1000026","DOIUrl":"https://doi.org/10.1109/DCC.2002.1000026","url":null,"abstract":"Summary form only given. The performance of software-only video codecs is often constrained by available processing power. Existing fast motion estimation algorithms are not designed to provide flexible, predictable control of computational complexity. We propose an adaptive algorithm, which maintains the computational complexity of the motion estimation function at various target levels by controlling the motion estimation search pattern.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133676460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Less redundant codes for variable size dictionaries","authors":"Zhen Yao, N. Rajpoot","doi":"10.1109/DCC.2002.1000024","DOIUrl":"https://doi.org/10.1109/DCC.2002.1000024","url":null,"abstract":"Summary form only given. We report on a family of variable-length codes with less redundancy than the flat code used in most variable-size dictionary-based compression methods. The length of codes belonging to this family is still bounded above by [log/sub 2/ |D|], where |D| denotes the dictionary size. We describe three of these codes, namely, the balanced code, the phase-in-binary code (PB), and the depth-span code (DS). As the name implies, the balanced code is constructed from a height-balanced tree, so it has the shortest average codeword length. The coding tree for the PB code has the interesting property that it is made of full binary phases, so the code can be computed efficiently using simple binary shift operations. The DS coding tree is maintained in such a way that the coder always finds the longest extendable codeword and extends it until it reaches the maximum length; it is optimal with respect to the code-length contrast. The PB and balanced codes yield similar improvements, around 3% to 7%, which is close to the relative redundancy of the flat code. The DS code is particularly good at dealing with files containing a large amount of redundancy, such as a running sequence of one symbol. We also performed an empirical study of the codeword distribution in the LZW dictionary and propose a scheme called dynamic block shifting (DBS) to further improve the codes' performance. Experiments suggest that DBS is helpful in compressing random sequences. From an application point of view, the PB code with DBS is recommended for general practical use.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114284435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compression techniques for active video content","authors":"A. Neogi, T. Chiueh","doi":"10.1109/DCC.2002.1000009","DOIUrl":"https://doi.org/10.1109/DCC.2002.1000009","url":null,"abstract":"Summary form only given. Conventional digital video playback systems provide only limited user interactivity, mostly in the form of VCR-like controls. In this model, the temporal ordering and the spatial viewpoints of the video streams being viewed are completely determined at authoring time. In contrast, we have defined a form of interactive video called active video (see http://www.ecsl.cs.sunysb.edu//spl sim/anindya/avs/avs.html, 2002), which supports hyper-linking among related video sequences and interpolation of video sequences with neighboring viewpoints, to offer end users the additional flexibility of choosing the sequencing and the viewing angle (even virtual ones) at playback time. However, active video has a substantially higher storage and transmission cost due to multiple time-synchronized video streams capturing the dynamic scene and the pixel-level correspondence maps encoding the spatial association among the frame-pairs of all adjacent streams. The maps are interpolated at run-time to generate virtual views. We describe and evaluate the following three compression techniques that alleviate the storage and network transmission costs of active video: spatial video compression; lossy map compression; lossless map compression.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114426549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Zero-error source coding with maximum distortion criterion","authors":"E. Tuncel, P. Koulgi, S. Regunathan, K. Rose","doi":"10.1109/DCC.2002.999947","DOIUrl":"https://doi.org/10.1109/DCC.2002.999947","url":null,"abstract":"Let finite source and reproduction alphabets X and Y and a distortion measure d: X/spl times/Y/spl rarr/[0,/spl infin/) be given. We study the minimum asymptotic rate required to describe a source distributed over X within a (given) distortion threshold D at every sample. The problem is hence a min-max problem, and the distortion measure is extended to vectors as follows: for x/sup n//spl isin/X/sup n/, y/sup n//spl isin/Y/sup n/, d(x/sup n/, y/sup n/)=max/sub i/d(x/sub i/, y/sub i/). In the graph-theoretic formulation we introduce, a code for the problem is a dominating set of an equivalent distortion graph. We introduce a linear programming lower bound for the minimum dominating set size of an arbitrary graph, and show that this bound is also the minimum asymptotic rate required for the corresponding source. Turning then to the optimality of scalar coding, we show that scalar codes are asymptotically optimal if the underlying graph is either an interval graph or a tree.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"449 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117112301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive image communication over binary channels with additive bursty noise","authors":"F. Behnamfar, F. Alajaji, T. Linder","doi":"10.1109/DCC.2002.999965","DOIUrl":"https://doi.org/10.1109/DCC.2002.999965","url":null,"abstract":"A progressive method for transmission of images over a bursty noise channel is presented. It is based on discrete wavelet transform (DWT) coding and channel-optimized scalar quantization. The main advantage of the proposed system is that it exploits the channel memory and hence has superior performance over a similar scheme designed for the equivalent memoryless channel through the use of channel interleaving. In fact, the performance of the proposed system improves as the noise becomes more correlated, at a fixed bit error rate. Comparisons are made with other alternatives which employ independent source and channel coding over the fully interleaved channel at various bit rates and bit error rates. It is shown that the proposed method outperforms these substantially more complex systems for the whole range of considered bit rates and for a wide range of channel conditions.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115848072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved behaviour of tries by the \"symmetrization\" of the source","authors":"Y. Reznik, W. Szpankowski","doi":"10.1109/DCC.2002.999975","DOIUrl":"https://doi.org/10.1109/DCC.2002.999975","url":null,"abstract":"In this paper, we propose and study a pre-processing technique for improving performance of digital tree (trie)-based search algorithms under asymmetric memoryless sources. This technique (which we call a symmetrization of the source) bijectively maps the sequences of symbols from the original (asymmetric) source into symbols of an output alphabet resulting in a more uniform distribution. We introduce a criterion of efficiency for such a mapping, and demonstrate that a problem of finding an optimal construction for a given source (or universal) symmetrization transform is equivalent to a problem of constructing a minimum redundancy variable-length-to-block code for this source (or class of sources). Based on this result, we propose search algorithms that incorporate known (optimal for a given source and universal) variable-length-to-block codes and study their asymptotic behaviour. We complement our analysis with a description of an efficient algorithm for universal symmetrization of binary memoryless sources, and compare the performance of the resulting search structure with the standard tries.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130353743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low bit rate image coding in the scale space","authors":"Xin Li","doi":"10.1109/DCC.2002.999941","DOIUrl":"https://doi.org/10.1109/DCC.2002.999941","url":null,"abstract":"Scale-space representation has been extensively studied in the computer vision community for analyzing image structures at different scales. This paper borrows and develops useful mathematical tools from scale-space theory to facilitate the task of image compression. Instead of compressing the original image directly, we propose to compress its scale-space representation obtained by forward diffusion with a Gaussian kernel at the chosen scale. The major contribution of this work is a novel solution to the ill-posed inverse diffusion problem. We analytically derive a nonlinear filter to deblur Gaussian blurring for 1D ideal step edges. The generalized 2D edge-enhancing filter only requires knowledge of local minima/maxima and preserves the geometric constraint of edges. When combined with a standard wavelet-based image coder, the forward and inverse diffusion can be viewed as a pair of pre-processing and post-processing stages used to select and preserve important image features at the given bit rate. Experimental results show that the proposed diffusion-based techniques can dramatically improve the visual quality of reconstructed images at low bit rates (below 0.25 bpp).","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127287266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low-complexity lossless and fine-granularity scalable near-lossless compression of color images","authors":"R. van der Vleuten","doi":"10.1109/DCC.2002.1000020","DOIUrl":"https://doi.org/10.1109/DCC.2002.1000020","url":null,"abstract":"Summary form only given. We present a method that extends lossless compression with the feature of fine-granularity scalable near-lossless compression while preserving the high compression efficiency and low complexity exhibited by dedicated lossless compression methods when compared to the scalable compression methods developed for lossy image compression. The method operates by splitting the image pixel values into their most significant bits (MSB) and least significant bits (LSB). The MSB are losslessly compressed by a dedicated lossless compression method (e.g. JPEG-LS). The LSB are compressed by a scalable encoder, i.e. in such a way that their description may be truncated at any desired point. We also present a method to automatically and adaptively determine the MSB/LSB split point such that a scalable bit string is obtained without affecting the compression efficiency and without producing compression artefacts for near-lossless compression. To determine the split point, first a low-complexity DPCM-type prediction is carried out on the original pixel values to obtain the prediction error signal. Next, the split point is computed from the average magnitude of this signal. Finally, applying a (lossless) color transform to decorrelate the image color components before compressing them provides a higher (lossless) compression ratio.","PeriodicalId":420897,"journal":{"name":"Proceedings DCC 2002. Data Compression Conference","volume":"386 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117266982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}