{"title":"Successively refinable trellis coded quantization","authors":"H. Jafarkhani, V. Tarokh","doi":"10.1109/DCC.1998.672134","DOIUrl":"https://doi.org/10.1109/DCC.1998.672134","url":null,"abstract":"We propose successively refinable trellis coded quantizers which are suitable for progressive transmission. A new trellis structure which is scalable is used in the design of our trellis coded quantizers. A hierarchical set partitioning is used to preserve successive refinability. Two algorithms for designing trellis coded quantizers which provide embedded bit streams are provided. The computational complexity of the proposed schemes is compared with that of trellis coded quantization. Simulation results show good performance for memoryless sources.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131202763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PRECIS: A method for fast compression of periodic halftones","authors":"R. Arps, C. Constantinescu","doi":"10.1109/DCC.1998.672234","DOIUrl":"https://doi.org/10.1109/DCC.1998.672234","url":null,"abstract":"Summary form only given. The Periodic Run Edge Compression for Image Systems (PRECIS) is a simple, fast algorithm to compress bitonal images containing periodic halftones. Its compression of periodic halftones doubles and often quadruples the compression obtained using the commonly used MMR algorithm. Its software execution time in compressing periodic halftones has been measured to be as short as half the execution time of MMR. In addition, one software embodiment of a simple version of PRECIS can be implemented simply by clever reuse of building blocks from an existing MMR implementation.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134646281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image transmission using arithmetic coding based continuous error detection","authors":"I. Kozintsev, J. Chou, K. Ramchandran","doi":"10.1109/DCC.1998.672162","DOIUrl":"https://doi.org/10.1109/DCC.1998.672162","url":null,"abstract":"Block cyclic redundancy check (CRC) codes represent a popular and powerful class of error detection techniques in modern data communication systems. Though efficient, CRCs can detect errors only after an entire block of data has been received and processed. We propose a new \"continuous\" error detection scheme using arithmetic coding that provides a novel tradeoff between the amount of added redundancy and the amount of time needed to detect an error once it occurs. We demonstrate how the new error detection framework improves the overall performance of transmission systems, and show how sizeable performance gains can be attained. We focus on two popular scenarios: (i) automatic repeat request (ARQ) based transmission; and (ii) forward error correction frameworks based on (serially) concatenated coding systems involving an inner error-correction code and an outer error-detection code.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132948809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal lossless compression of a class of dynamic sources","authors":"J. Reif, J. Storer","doi":"10.1109/DCC.1998.672221","DOIUrl":"https://doi.org/10.1109/DCC.1998.672221","url":null,"abstract":"The usual assumption for proofs of the optimality of lossless encoding is a stationary ergodic source. Dynamic sources with non-stationary probability distributions occur in many practical situations where the data source is constructed by a composition of distinct sources, for example, a document with multiple authors, a multimedia document, or the composition of distinct packets sent over a communication channel. There is a vast literature of adaptive methods used to tailor the compression to dynamic sources. However, little is known about optimal or near optimal methods for lossless compression of strings generated by sources that are not stationary ergodic. We present a number of asymptotically efficient algorithms that address, at least from the theoretical point of view, optimal lossless compression of dynamic sources. We assume the source produces an infinite sequence of concatenated finite strings generated by sampling a stationary ergodic source.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123795156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-line compression of high precision printer images by evolvable hardware","authors":"M. Salami, H. Sakanashi, Masaharu Tanaka, M. Iwata, Takio Kurita, T. Higuchi","doi":"10.1109/DCC.1998.672150","DOIUrl":"https://doi.org/10.1109/DCC.1998.672150","url":null,"abstract":"This paper describes an image compression system based on evolvable hardware (EHW) for high precision printers (HPP). These printers are especially flexible for book publishing, but require large disk space for images, in particular those of higher resolution. To increase the printing speed and reduce the disk space, the images should be compressed. The system for this compression must be (1) adaptive, so that it changes depending on image characteristics and (2) on-line, which means implemented in hardware. The standard compression methods have a simple template change strategy which is not efficient for the images of HPP. We used an EHW system for compressing HPP images in real time. The EHW is a type of adaptive hardware which allows evolutionary algorithms to change the hardware configuration in real time. It works as fast as other compression systems (like the JBIG standard), but changes the image modeling to reflect the changes in the image characteristics. Simulation results show more than a 50% increase in compression ratio compared to JBIG for the printer system.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127247248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Turbo decoding of hidden Markov sources with unknown parameters","authors":"J. Garcia-Frías, J. Villasenor","doi":"10.1109/DCC.1998.672143","DOIUrl":"https://doi.org/10.1109/DCC.1998.672143","url":null,"abstract":"We describe techniques for joint source-channel coding of hidden Markov sources using a modified turbo decoding algorithm. This avoids the need to perform any explicit source coding prior to transmission, and instead allows the decoder to utilize the a priori structure due to the hidden Markov source. In addition, we present methods that allow the decoder to estimate the parameters of the Markov model. In combination, these techniques allow the decoder to identify, estimate, and exploit the source structure. The estimation does not degrade the performance of the system, i.e. the joint estimation/decoding allows convergence at the same noise levels as a system in which the decoder has perfect a priori knowledge of the source parameters.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115246112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint source/channel coding for variable length codes","authors":"K. Sayood, H. Otu, Nejat Demir","doi":"10.1109/DCC.1998.672140","DOIUrl":"https://doi.org/10.1109/DCC.1998.672140","url":null,"abstract":"When using entropy coding over a noisy channel it is customary to protect the highly vulnerable bitstream with an error correcting code. In this paper we propose a technique which utilizes the residual redundancy at the output of the source coder to provide error protection for entropy coded systems. The proposed approach provides 4-10 dB improvement over the standard approaches at a reduced rate.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116160950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A scalable entropy code","authors":"T. Verma, T. Meng","doi":"10.1109/DCC.1998.672323","DOIUrl":"https://doi.org/10.1109/DCC.1998.672323","url":null,"abstract":"Summary form only given. We present an algorithm for constructing entropy codes that allow progressive transmission. The algorithm constructs codes by forming an unbalanced tree in a fashion similar to Huffman coding. It differs, however, in that nodes are combined in a rate-distortion sense. Because nodes are formed with both rate and distortion in mind, each internal tree node, in addition to each leaf node, has a reconstruction vector and a path map, or codeword, associated with it. The code associated with the leaf nodes is a lossless, asymptotically optimal (for many sources), prefix code. The codes associated with internal nodes are lossy prefix codes, but have lower average length than the lossless code. Using codes associated with the tree and pruned subtrees, an encoded source can be reconstructed with higher fidelity as more bits become available, therefore allowing a successive approximation character. In addition, because the lossless code is asymptotically optimal for many sources, the cost of using the lossless progressive code can be made arbitrarily small for these sources.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115014617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lossless interframe image compression via context modeling","authors":"Xiaolin Wu, W. K. Choi, N. Memon","doi":"10.1109/DCC.1998.672169","DOIUrl":"https://doi.org/10.1109/DCC.1998.672169","url":null,"abstract":"In this paper, we present an interband version of CALIC (context-based adaptive lossless image codec), a lossless image coding technique. It is demonstrated that CALIC's techniques of context-based modeling of images lend themselves easily to modeling of image sequences. The generalized interframe CALIC can exploit both interframe and intraframe statistical redundancies, and obtain significant compression gains over intraframe CALIC. The advantage of interframe CALIC is demonstrated by experimental results on different types of multispectral images.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114061151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The H.263+ video coding standard: complexity and performance","authors":"B. Erol, M. Gallant, G. Côté, F. Kossentini","doi":"10.1109/DCC.1998.672154","DOIUrl":"https://doi.org/10.1109/DCC.1998.672154","url":null,"abstract":"The emerging ITU-T H.263+ low bit-rate video coding standard is version 2 of the draft international standard ITU-T H.263. In this paper, we discuss this emerging video coding standard and present compression performance results based on our public domain implementation of H.263+.","PeriodicalId":191890,"journal":{"name":"Proceedings DCC '98 Data Compression Conference (Cat. No.98TB100225)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128351902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}