{"title":"Classified variable rate residual vector quantization applied to image subband coding","authors":"C. Barnes, E. J. Holder","doi":"10.1109/DCC.1993.253122","DOIUrl":"https://doi.org/10.1109/DCC.1993.253122","url":null,"abstract":"The linear growth with the dimension-rate product of RVQ computation and memory requirements permits practical implementations of RVQs with large dimensions or high rates. This feature is exploited by quantizing low-resolution subbands with small-dimension high-rate RVQs, and high-resolution subbands with large-dimension low-rate RVQs. The RVQ vector sizes vary by a factor of four in parallel with the decimation and up-sampling processes from one resolution level to the next. Two forms of rate allocation are achieved with the RVQ subband system. A type of concentric shell partitioned vector classifier with side information is used to separate noise-like subband vectors from structured subband vectors. For the large-dimension low-rate RVQs, variable rate RVQ with side information permits different numbers of RVQ stages to be used on different vectors within a concentric shell partition class.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125285835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segmentation-based progressive image coding","authors":"Xiaolin Wu, Yonggang Fang","doi":"10.1109/DCC.1993.253123","DOIUrl":"https://doi.org/10.1109/DCC.1993.253123","url":null,"abstract":"The authors present an adaptive, tree-structured segmentation that serves as an image pyramid to facilitate progressive transmission. The new image pyramid is semantically more powerful than regular tessellations while syntactically simpler than free segmentation. This good compromise between adaptability and complexity is a key to the high compression ratios achieved by the proposed image coder. The performance is further enhanced by exploiting the statistical dependency between layers of the image pyramid.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124165876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Universality and rates of convergence in lossy source coding","authors":"T. Linder, G. Lugosi, K. Zeger","doi":"10.1109/DCC.1993.253141","DOIUrl":"https://doi.org/10.1109/DCC.1993.253141","url":null,"abstract":"The authors show that without knowing anything about the statistics of a bounded real-valued memoryless source, it is possible to construct a sequence of codes, of rate not exceeding a fixed number R>0, such that the per-letter sample distortion converges to the distortion-rate function D(R) with probability one as the length of the message approaches infinity. It is proven that the distortion converges to D(R) at rate sqrt(log log n / log n) almost surely, where n is the length of the data to be transmitted.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127092102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and performance of tree-structured vector quantizers","authors":"Jianhua Lin, J. Storer","doi":"10.1109/DCC.1993.253120","DOIUrl":"https://doi.org/10.1109/DCC.1993.253120","url":null,"abstract":"This paper considers optimal vector quantizers which minimize the expected distortion subject to a cost such as the number of leaves (storage cost), the leaf entropy (lossless encoding rate), the expected depth (average quantization time), or the maximum depth (maximum quantization time). It analyzes the heuristic of successive partitioning, and develops a class of strategies subsuming most of those used in the past. Experimental results show that these strategies are more efficient than existing methods, and achieve comparable or better compression. The relationship among different cost functions is considered and ways of combining multiple cost constraints are proposed.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126546413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visually optimal DCT quantization matrices for individual images","authors":"A. Watson","doi":"10.1109/DCC.1993.253132","DOIUrl":"https://doi.org/10.1109/DCC.1993.253132","url":null,"abstract":"A custom quantization matrix tailored to a particular image is designed by an image-dependent perceptual method incorporating solutions to the problems of luminance and contrast masking, error pooling and quality selectability.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117347847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can random fluctuation be exploited in data compression?","authors":"I. K. R. Rao, M. D. Patil","doi":"10.1109/DCC.1993.253143","DOIUrl":"https://doi.org/10.1109/DCC.1993.253143","url":null,"abstract":"Much of compression theory assumes knowledge of the exact statistics of the alphabet being encoded. In practice, codes are often based on approximations of the true statistics. This paper examines the consequences of random fluctuations on coding efficiency. It shows that exact statistics permit more efficient encoding, but when the error is due to random fluctuation, the savings are small, on the order of the size of the extra table needed for decoding.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132728480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An improved sequential search multistage vector quantizer","authors":"David J. Miller, K. Rose","doi":"10.1109/DCC.1993.253149","DOIUrl":"https://doi.org/10.1109/DCC.1993.253149","url":null,"abstract":"A new structure permits improved solutions which approximate the exhaustive-search multistage solution. A deterministic annealing design method capitalizing on this structure is formulated within the framework of information theory. The sequential search constraint is included as a prior, and the principle of minimum cross-entropy is invoked. The method obtains improvement over both the standard sequential design and joint optimization approaches.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126956785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An embedded hierarchical image coder using zerotrees of wavelet coefficients","authors":"J. M. Shapiro","doi":"10.1109/DCC.1993.253128","DOIUrl":"https://doi.org/10.1109/DCC.1993.253128","url":null,"abstract":"This paper describes a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance. A fully embedded code represents a sequence of binary decisions that distinguish an image from the 'null' image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. The algorithm consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images, but requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. It is based on four key concepts: (1) wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression achieved via adaptive arithmetic coding.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129798903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sort order preserving data compression for extended alphabets","authors":"A. Zandi, B. Iyer, G. Langdon","doi":"10.1109/DCC.1993.253116","DOIUrl":"https://doi.org/10.1109/DCC.1993.253116","url":null,"abstract":"The compression method is based on composing phrases from symbols. The authors extend the sort-order property to parsing models, i.e. to Variable-to-Fixed Length codes, or a static Ziv-Lempel algorithm, or alternatively a Tunstall algorithm for an adjoint source. The parsed phrases comprising the original storage data units have the same position in the sort ordering as the original units themselves. The VFL result may be further compressed by use of Variable-to-Variable Length techniques based on the relative frequencies of the parsed phrases. The sort-order property is facilitated by an 'end of record' symbol and requires a new zilch symbol.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128513996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Codes with monotonic codeword lengths","authors":"J. Abrahams","doi":"10.1109/DCC.1993.253145","DOIUrl":"https://doi.org/10.1109/DCC.1993.253145","url":null,"abstract":"The author studies minimum average codeword length coding under the constraint that the codewords are monotonically non-decreasing in length. She derives bounds on the average length of an optimal 'monotonic' code, and gives sufficient conditions such that algorithms for optimal alphabetic codes can be used to find the optimal 'monotonic' code.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116071093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}