{"title":"A low-power analog CMOS vector quantizer","authors":"G. Tuttle, S. Fallahi, A. Abidi","doi":"10.1109/DCC.1993.253108","DOIUrl":"https://doi.org/10.1109/DCC.1993.253108","url":null,"abstract":"The authors describe the implementation and performance of what might be termed a 'Vector A/D Converter'. The IC stores a codebook of vectors on-chip, accepts a 16-element analog vector at the input, calculates the Euclidean distance between the input and all codevectors (referred to as global search), and outputs an 8-bit code to index the codevector closest to the input prompt. At a 5 MHz clock rate it dissipates less than 50 mW to quantize 16 element analog vectors once every 10 clock periods, giving a 30 Hz frame rate for a 512*512 pixel gray scale image.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125617073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining image classification and image compression using vector quantization","authors":"K. Oehler, R. Gray","doi":"10.1109/DCC.1993.253150","DOIUrl":"https://doi.org/10.1109/DCC.1993.253150","url":null,"abstract":"The goal is to produce codes where the compressed image incorporates classification information without further signal processing. This technique can provide direct low level classification or an efficient front end to more sophisticated full-frame recognition algorithms. Vector quantization is a natural choice because two of its design components, clustering and tree-structured classification methods, have obvious applications to the pure classification problem as well as to the compression problem. The authors explicitly incorporate a Bayes risk component into the distortion measure used for code design in order to permit a tradeoff of mean squared error with classification error. This method is used to analyze simulated data, identify tumors in computerized tomography lung images, and identify man-made regions in aerial images.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"188 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124138097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust, variable bit-rate coding using entropy-biased codebooks","authors":"J. Fowler, S. Ahalt","doi":"10.1109/DCC.1993.253113","DOIUrl":"https://doi.org/10.1109/DCC.1993.253113","url":null,"abstract":"The authors demonstrate the use of a differential vector quantization (DVQ) architecture for the coding of digital images. An artificial neural network is used to develop entropy-biased codebooks which yield substantial data compression without entropy coding and are very robust with respect to transmission channel errors. Two methods are presented for variable bit-rate coding using the described DVQ algorithm. In the first method, both the encoder and the decoder have multiple codebooks of different sizes. In the second, variable bit-rates are achieved by using subsets of one fixed codebook. The performance of these approaches is compared, under conditions of error-free and error-prone channels. Results show that this coding technique yields pictures of excellent visual quality at moderate compression rate.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115064135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal piecewise-linear compression of images","authors":"V. Bhaskaran, B. Natarajan, K. Konstantinides","doi":"10.1109/DCC.1993.253133","DOIUrl":"https://doi.org/10.1109/DCC.1993.253133","url":null,"abstract":"The authors explore compression using an optimal algorithm for the approximation of waveforms with piecewise linear functions. They describe a modification of the algorithm that is provably good, but simple enough for the associated hardware implementation to be presentable.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121731466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compression of DNA sequences","authors":"S. Grumbach, F. Tahi","doi":"10.1109/DCC.1993.253115","DOIUrl":"https://doi.org/10.1109/DCC.1993.253115","url":null,"abstract":"The authors propose a lossless algorithm based on regularities, such as the presence of palindromes, in the DNA. The results obtained, although not satisfactory, are far beyond classical algorithms.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115348840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A MPEG encoder implementation on the Princeton Engine video supercomputer","authors":"H. Taylor, D. Chin, A. Jessup","doi":"10.1109/DCC.1993.253107","DOIUrl":"https://doi.org/10.1109/DCC.1993.253107","url":null,"abstract":"The emergence of world wide standards for video compression has created a demand for design tools and simulation resources to support algorithm research and new product development. Because of the need for subjective study in the design of video compression algorithms it is essential that flexible yet computationally efficient tools be developed. The authors describe implementation of a programmable MPEG encoder on a massively parallel real-time image processing system. The system provides control over program attributes such as the size of the motion search window, buffer management and bit rate. Support is provided for real-time image acquisition and preprocessing from both analog and digital video sources (D1/D2).<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124883707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimizing error and VLSI complexity in the multiplication free approximation of arithmetic coding","authors":"G. Feygin, P. Gulak, P. Chow","doi":"10.1109/DCC.1993.253138","DOIUrl":"https://doi.org/10.1109/DCC.1993.253138","url":null,"abstract":"Two new algorithms for performing arithmetic coding without multiplication are presented. The first algorithm, suitable for an alphabet of arbitrary size, reduces the worst-case normalized excess length to under 0.8% versus 1.911% for the previously known best method of Chevion et al. The second algorithm, suitable only for alphabets of less than twelve symbols, allows even greater reduction in the excess code length. For the important binary alphabet the worst-case excess code length is reduced to less than 0.1% versus 1.1% for the method of Chevion et al. The implementation requirements of the proposed new algorithms are discussed and shown to be similar.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127322409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint codebook design for summation product-code vector quantizers","authors":"W. Chan, A. Gersho, S. Soong","doi":"10.1109/DCC.1993.253146","DOIUrl":"https://doi.org/10.1109/DCC.1993.253146","url":null,"abstract":"With respect to the generalized product code (GPC) model for structured vector quantization, multistage VQ (MSVQ) and tree-structured VQ are members of a family of summation product codes (SPCs), defined by the prototypical synthesis function x=f/sub 1/+...+f/sub s/, where f/sub i/, i=1, . . ., s are the residual vector features. The authors describe an algorithm paradigm for the joint design of the feature codebooks constituting a GPC. They specialize the paradigm to a joint design algorithm for the SPCs and exhibit experimental results for the MSVQ of simulated sources. The performance improvements over conventional 'greedy' design are essentially 'free' as the only cost is a moderate increase in design complexity.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115732932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fractal based image compression with affine transformations","authors":"H. Raittinen, K. Kaski","doi":"10.1109/DCC.1993.253125","DOIUrl":"https://doi.org/10.1109/DCC.1993.253125","url":null,"abstract":"As the needs for information transfer and storage increase, data coding and compression become increasingly important in applications such as digital HDTV, telefax, ISDN and image data bases. The authors have developed a fractal image compression method and tested it with binary (black and white) images. The decoded results are similar to the original images. The compression ratios are found to be extremely high.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"360 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115782412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time focal-plane image compression","authors":"R. Tawel","doi":"10.1109/DCC.1993.253109","DOIUrl":"https://doi.org/10.1109/DCC.1993.253109","url":null,"abstract":"A novel analog focal-plane processor, the Vector Array Processor (VAP), is designed specifically for use in real-time/video-rate on-line lossy image compression. This custom CMOS processor is based architecturally on the Vector Quantization algorithm in image coding, The current implementation of the processor can handle codebook sizes of up to 128 vectors of dimensionality 16. The VAP performs vector matching in a fully parallel fashion, utilizing as its basic computational element the 'bump' circuit that computes the similarity between two input voltages and outputs a current proportional to this disparity.<<ETX>>","PeriodicalId":315077,"journal":{"name":"[Proceedings] DCC `93: Data Compression Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128522700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}