Proceedings DCC '95 Data Compression Conference: Latest Publications

Video coding using 3 dimensional DCT and dynamic code selection
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515561
M. Bauer, K. Sayood
{"title":"Video coding using 3 dimensional DCT and dynamic code selection","authors":"M. Bauer, K. Sayood","doi":"10.1109/DCC.1995.515561","DOIUrl":"https://doi.org/10.1109/DCC.1995.515561","url":null,"abstract":"Summary only given. We address the quality issue, and present a method for improved coding of the 3D DCT coefficients. Performance gain is achieved through the use of dynamically selected multiple coding algorithms. The resulting performance is excellent, giving a compression ratio of greater than 100:1 for image reproduction. The process consists of stacking 8 frames and breaking the data into 8×8×8 pixel cubes. The three-dimensional DCT is applied to each cube. Each cube is then scanned in each dimension to determine if significant energy exists beyond the first two coefficients. Significance is determined with separate thresholds for each dimension. A single bit of side information is transmitted for each dimension of each cube to indicate whether more than two coefficients will be transmitted. The remaining coefficients of all cubes are reordered into a linear array such that the elements with the highest expected energies appear first and lower expected energies appear last. This tends to group coefficients with similar statistical properties for the most efficient coding. Eight different encoding methods are used to convert the coefficients into bits for transmission. The Viterbi algorithm is used to select the best coding method. The cost function is the number of bits that need to be sent. 
Each of the eight coding methods is optimized for a different range of values.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"5 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115675198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
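The cube pipeline this abstract describes can be sketched numerically. The sizes, threshold, and random data below are illustrative stand-ins, not the authors' parameters; SciPy's `dctn`/`idctn` supply the separable 3D DCT:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16, 16))  # a stack of 8 frames of 16x16 pixels

# Break the stack into 8x8x8 cubes and apply the 3D DCT to each cube.
coeffs = np.empty_like(frames)
for y in range(0, 16, 8):
    for x in range(0, 16, 8):
        coeffs[:, y:y+8, x:x+8] = dctn(frames[:, y:y+8, x:x+8], norm="ortho")

# One side-information bit per dimension of a cube: does significant energy
# exist beyond the first two coefficients along that axis?  (Illustrative
# single threshold; the paper uses separate thresholds per dimension.)
cube = coeffs[:, :8, :8]
side_bits = [int((np.take(cube, range(2, 8), axis=d) ** 2).sum() > 1e-3)
             for d in range(3)]

# The orthonormal transform is invertible, so unquantized cubes round-trip.
recon = np.empty_like(frames)
for y in range(0, 16, 8):
    for x in range(0, 16, 8):
        recon[:, y:y+8, x:x+8] = idctn(coeffs[:, y:y+8, x:x+8], norm="ortho")
```

The compression itself comes from thresholding, reordering, and entropy coding the coefficients, which this sketch omits.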
Algorithm evaluation for the synchronous data compression standards
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515596
M. Maier
{"title":"Algorithm evaluation for the synchronous data compression standards","authors":"M. Maier","doi":"10.1109/DCC.1995.515596","DOIUrl":"https://doi.org/10.1109/DCC.1995.515596","url":null,"abstract":"In association with an industry standardization effort, we have developed an evaluation procedure for compression algorithms for communication networks. The Synchronous Data Compression Consortium is a group of data transmission equipment makers who are promoting an interoperable standard for link layer compression. The target market is synchronous interconnection of routers and bridges for internetworking over the public digital transmission network. Compression is desirable for such links to better match their speed to that of the interconnected local area networks. But achievable performance is affected by the interaction of the algorithm, the networking protocols, and implementation details. The compression environment is different from traditional file compression in inducing a tradeoff between compression ratio, compression time, and the performance metric (network throughput). In addition, other parameters and behaviors are introduced, including robustness to data retransmission and multiple interleaved streams. Specifically, we have evaluated the following issues through both synchronous queuing and direct network simulation:","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"346 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124272523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction of fixed pattern background and restoration of JPEG compressed CCD images
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515534
M. Datcu, G. Schwarz, K. Schmidt, C. Reck
{"title":"Correction of fixed pattern background and restoration of JPEG compressed CCD images","authors":"M. Datcu, G. Schwarz, K. Schmidt, C. Reck","doi":"10.1109/DCC.1995.515534","DOIUrl":"https://doi.org/10.1109/DCC.1995.515534","url":null,"abstract":"Summary form only given; substantially as follows. The present paper addresses the problem of the removal of the sensor background patterns, dark current and responsivity, from CCD images, when the uncorrected image was transmitted through a JPEG like block transform coding system. The work is of particular interest for imaging systems which operate under severe hardware restrictions, and require high accuracy, e.g. deep space cameras. The complexity of the problem comes from the aliasing of the image signal and CCD background patterns during the quantization in the transformed domain. The authors investigated several solutions and selected the optimal one based on three objectives: the radiometric accuracy, the visual quality, and the computational complexity. The solution selected for the background pattern removal and image restoration uses a combination of different methods: correction in space domain and iterative regularization in both space and DCT domain.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116985894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
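The ideal space-domain correction, before compression enters the picture, is straightforward, as the sketch below shows with synthetic patterns (not flight data). The paper's actual difficulty is that block-transform quantization aliases the signal with these patterns, so the exact division no longer applies and iterative regularization in the space and DCT domains is needed:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(50.0, 200.0, (8, 8))   # true radiance (synthetic)
dark = rng.uniform(0.0, 5.0, (8, 8))       # dark-current pattern
resp = rng.uniform(0.9, 1.1, (8, 8))       # per-pixel responsivity pattern

raw = scene * resp + dark                  # what the CCD reports
corrected = (raw - dark) / resp            # exact only on uncompressed data
```

After JPEG-like coding, `raw` is perturbed blockwise in the DCT domain, which is why the simple closed-form correction above stops being exact.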
Multiple-dictionary compression using partial matching
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515517
Dzung T. Hoang, Philip M. Long, J. Vitter
{"title":"Multiple-dictionary compression using partial matching","authors":"Dzung T. Hoang, Philip M. Long, J. Vitter","doi":"10.1109/DCC.1995.515517","DOIUrl":"https://doi.org/10.1109/DCC.1995.515517","url":null,"abstract":"Motivated by the desire to find text compressors that compress better than existing dictionary methods, but run faster than PPM implementations, we describe methods for text compression using multiple dictionaries, one for each context of preceding characters, where the contexts have varying lengths. The context to be used is determined using an escape mechanism similar to that of PPM methods. We describe modifications of three popular dictionary coders along these lines and experiments evaluating their efficacy using the text files in the Calgary corpus. Our results suggest that modifying LZ77 along these lines yields an improvement in compression of about 4%, that modifying LZFG yields a compression improvement of about 8%, and that modifying LZW in this manner yields an average improvement on the order of 12%.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124816552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
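The PPM-style escape mechanism can be illustrated with a toy order-1 symbol coder: each one-character context gets its own dictionary, a symbol already in the current context's dictionary is sent as a small index, and an unseen symbol is escaped and sent literally. This is a sketch of the escape idea only, not the paper's modified LZ77/LZFG/LZW coders:

```python
def encode(text, start="\0"):
    """Toy order-1 coder: per-context dictionaries with a literal escape."""
    seen, out, prev = {}, [], start
    for c in text:
        d = seen.setdefault(prev, [])
        if c in d:
            out.append(("hit", d.index(c)))   # cheap: index into context dict
        else:
            out.append(("esc", c))            # escape: send the raw symbol
            d.append(c)
        prev = c
    return out

def decode(tokens, start="\0"):
    seen, out, prev = {}, [], start
    for kind, v in tokens:
        d = seen.setdefault(prev, [])
        if kind == "hit":
            c = d[v]
        else:
            c = v
            d.append(c)
        out.append(c)
        prev = c
    return "".join(out)

msg = "abracadabra abracadabra"
tokens = encode(msg)
```

On the repeated text, most symbols of the second `abracadabra` become context hits rather than escapes, which is the compression opportunity the multi-dictionary approach exploits.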
Coding gain of intra/inter-frame subband systems
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515559
G. Galvagno, G. Mian, R. Rinaldo
{"title":"Coding gain of intra/inter-frame subband systems","authors":"G. Galvagno, G. Mian, R. Rinaldo","doi":"10.1109/DCC.1995.515559","DOIUrl":"https://doi.org/10.1109/DCC.1995.515559","url":null,"abstract":"Summary form only given. Typical image sequence coders use motion compensation techniques in connection with coding of the motion compensated difference images (interframe coding). Moreover, the difference loop is initialized from time to time by intraframe coding of images. It is therefore important to have a procedure that allows one to evaluate the performance of a particular coding scheme: coding gain and rate-distortion figures are used in this work for this purpose. We present an explicit procedure to compute the coding gain for two-dimensional separable subband systems, both in the case of a uniform and a pyramid subband decomposition, and for the case of interframe coding. The technique operates in the signal domain and requires the knowledge of the autocorrelation function of the input process. In the case of a separable subband system and image spectrum, the coding gain can be computed by combining the results relative to appropriately defined one-dimensional filtering schemes, thus making the technique very attractive in terms of computational complexity. We consider both the case of a uniform subband decomposition and of a pyramid decomposition. The developed procedure is applied to compute the subband coding gain for motion compensated signals in the case of images modeled as separable Markov processes: different filter banks are compared to each other and to transform coding. In order to have indications on the effectiveness of motion compensation, we also compute the coding gain for intraframe images. 
We show that the results for the image models are in very good agreement with those obtained with real-world data.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125588579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
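For context, the classical subband coding gain for equal-bandwidth subbands is the ratio of the arithmetic to the geometric mean of the subband variances. This is the textbook definition only, not the paper's full procedure (which starts from the input autocorrelation and handles pyramid decompositions), and the variance values below are invented:

```python
import numpy as np

def coding_gain(variances):
    """Arithmetic mean over geometric mean of the subband variances."""
    v = np.asarray(variances, dtype=float)
    return v.mean() / np.exp(np.log(v).mean())

flat = coding_gain([1.0, 1.0, 1.0, 1.0])        # equal energy: no gain
skewed = coding_gain([4.0, 1.0, 0.25, 0.0625])  # concentrated energy: gain
```

The gain exceeds 1 exactly when the decomposition concentrates energy unevenly across subbands, which is why filter-bank choice matters.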
FFT based fast architecture & algorithm for discrete wavelet transforms
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515550
A. Sri-Krishna, C. Chu, M. Bayoumi
{"title":"FFT based fast architecture & algorithm for discrete wavelet transforms","authors":"A. Sri-Krishna, C. Chu, M. Bayoumi","doi":"10.1109/DCC.1995.515550","DOIUrl":"https://doi.org/10.1109/DCC.1995.515550","url":null,"abstract":"Summary form only given. A non-recursive (unlike classical dyadic decomposition) and fast Fourier transform based architecture for computing discrete wavelet transforms (DWT) of a one dimensional sequence is presented. The DWT coefficients at all resolutions can be generated simultaneously without waiting for generation of coefficients at a lower octave level. This architecture is faster than architectures proposed so far for DWT decomposition (which are implementations based on recursion) and can be fully pipelined. The complexity of the control circuits for this architecture is much lower as compared to implementation of recursive methods. Consider the computation of the DWT (four octaves) of a sequence. Recursive dyadic decomposition can be converted to a non-recursive method as shown. We can move all the decimators shown to the extreme right (towards output end) and have a single filter and a single decimator in each path. We note that a decimator (of factor k) when so moved across a filter of length L will increase the length of the filter by a factor of k. 
Thus we will get first octave DWT coefficients by convolving input sequence with a filter of length L and decimating the output by a factor of 2.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129603990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
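The decimator-moving step the summary describes is the multirate noble identity: a decimator by k followed by a filter of length L equals the k-fold upsampled filter (length about kL) followed by the decimator. A quick numerical check with arbitrary signals and k = 2:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(64)       # input sequence
h = rng.standard_normal(4)        # length-L analysis filter

# Recursive form: decimate by 2, then filter with h.
y1 = np.convolve(x[::2], h)

# Decimator moved toward the output: filter with h upsampled by 2
# (zeros between taps), then decimate.
h_up = np.zeros(2 * len(h) - 1)
h_up[::2] = h
y2 = np.convolve(x, h_up)[::2]
```

Applying this repeatedly pushes all decimators to the output end, leaving a single (longer) filter per octave path — the non-recursive form the architecture implements.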
Accelerating fractal image compression by multi-dimensional nearest neighbor search
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515512
D. Saupe
{"title":"Accelerating fractal image compression by multi-dimensional nearest neighbor search","authors":"D. Saupe","doi":"10.1109/DCC.1995.515512","DOIUrl":"https://doi.org/10.1109/DCC.1995.515512","url":null,"abstract":"In fractal image compression the encoding step is computationally expensive. A large number of sequential searches through a list of domains (portions of the image) are carried out while trying to find the best match for another image portion. Our theory developed here shows that this basic procedure of fractal image compression is equivalent to multi-dimensional nearest neighbor search. This result is useful for accelerating the encoding procedure in fractal image compression. The traditional sequential search takes linear time whereas the nearest neighbor search can be organized to require only logarithmic time. The fast search has been integrated into an existing state-of-the-art classification method thereby accelerating the searches carried out in the individual domain classes. In this case we record acceleration factors from 1.3 up to 11.5 depending on image and domain pool size with negligible or minor degradation in both image quality and compression ratio. Furthermore, as compared to plain classification our method is demonstrated to be able to search through larger portions of the domain pool without increasing the computation time.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129255106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 132
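The speedup comes from replacing a linear scan over domain blocks with a spatial index. A minimal sketch with a kd-tree, using plain Euclidean nearest neighbor on random feature vectors (the paper's contribution includes the feature normalization that makes Euclidean distance equivalent to the least-squares match, which is omitted here):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
domains = rng.standard_normal((2000, 16))  # one feature vector per domain block
queries = rng.standard_normal((50, 16))    # feature vectors of range blocks

# O(N)-per-query sequential search, as in traditional fractal encoders.
linear = np.array([np.argmin(((domains - q) ** 2).sum(axis=1)) for q in queries])

# ~O(log N)-per-query kd-tree search.
tree = cKDTree(domains)
_, nearest = tree.query(queries)
```

Both searches return the same domain indices; only the query cost differs, which matters because a real encoder issues one query per range block.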
Wireless video coding system demonstration
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515558
J. Villasenor, R. Jain, B. Belzer, W. Boring, C. Chien, C. Jones, J. Liao, S. Molloy, S. Nazareth, B. Schoner, J. Short
{"title":"Wireless video coding system demonstration","authors":"J. Villasenor, R. Jain, B. Belzer, W. Boring, C. Chien, C. Jones, J. Liao, S. Molloy, S. Nazareth, B. Schoner, J. Short","doi":"10.1109/DCC.1995.515558","DOIUrl":"https://doi.org/10.1109/DCC.1995.515558","url":null,"abstract":"Summary form only given. We have developed and present here a prototype point-to-point wireless video system that has been implemented using a combination of commercial components and custom hardware. The coding algorithm being used consists of subband decomposition using low-complexity, integer-coefficient filters, scalar quantization, and run-length and entropy coding. The prototype system consists of the following major components: spread spectrum radio with interface card and driver, compression board, and an NEC laptop and docking station which provide the PC bus slots and control. The compression algorithms are implemented on a board with a single 10000-gate FPGA. Prior to implementing the algorithms in hardware, a study was performed to resolve issues of word length and scaling, and to select quantization and run length parameters. It was determined that 16-bit precision in the wavelet transform stage is sufficient to prevent underflow and overflow provided that rescaling of data is correctly performed. After processing by the FPGA, the compressed video is transferred to the PC for transmission over the radio. A commercial serial card (PI Card) provides a synchronous serial interface to the radio. 
The serial controller chip used by this card supports several serial protocols, and thus the effect of these protocols on the data in a wireless environment can be tested.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127970054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
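The "low-complexity, integer-coefficient filters" are in the spirit of reversible integer transforms such as the S-transform sketched below (an illustration of the integer-arithmetic idea, not necessarily the demo's actual filter bank). Fixed word lengths work because every intermediate value stays an integer:

```python
def s_transform(pairs):
    """Reversible integer Haar/S-transform on (a, b) sample pairs."""
    out = []
    for a, b in pairs:
        d = a - b              # difference (high band)
        s = b + (d >> 1)       # floor average (low band)
        out.append((s, d))
    return out

def s_inverse(pairs):
    out = []
    for s, d in pairs:
        b = s - (d >> 1)       # undo the update step
        a = b + d              # undo the difference step
        out.append((a, b))
    return out

samples = [(7, 3), (-5, 12), (255, 0), (8, 8)]
```

Because the forward and inverse steps use the same floor shifts, the round trip is exact in integer arithmetic, with no rounding error to overflow a 16-bit datapath unexpectedly.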
Modeling word occurrences for the compression of concordances
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515572
A. Bookstein, S. T. Klein, T. Raita
{"title":"Modeling word occurrences for the compression of concordances","authors":"A. Bookstein, S. T. Klein, T. Raita","doi":"10.1109/DCC.1995.515572","DOIUrl":"https://doi.org/10.1109/DCC.1995.515572","url":null,"abstract":"Summary form only given. Effective compression of a text-based information retrieval system involves compressing not only the text itself, but also the concordance by which one accesses that text and which occupies an amount of storage comparable to the text itself. The concordance can be a rather complicated data structure, especially if it permits hierarchical access to the database. But one or more components of the hierarchy can usually be conceptualized as a bit-map. We conceptualize our bit-map as being generated as follows. At any bit-map site we are in one of two states: a cluster state (C), or a between-cluster state (B). In a given state, we generate a bit-map-value of zero or one and, governed by the transition probabilities of the model, enter a new state as we move to the next bit-map site. Such a model has been referred to as a hidden Markov model in the literature. Unfortunately, this model is analytically difficult to use. To approximate it, we introduce several traditional Markov models with four states each, B and C as above, and two transitional states. We present the models, show how they are connected, and state the formal compression algorithm based on these models. 
We also include some experimental results.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134025516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
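The two-state generative picture (cluster state C, between-cluster state B) is easy to simulate. The transition and emission probabilities below are invented for illustration, not fitted to any concordance:

```python
import numpy as np

rng = np.random.default_rng(4)
p_one = {"C": 0.9, "B": 0.05}    # chance of emitting a 1-bit in each state
p_stay = {"C": 0.8, "B": 0.95}   # chance of remaining in the current state

state, bits = "B", []
for _ in range(20000):
    bits.append(int(rng.random() < p_one[state]))
    if rng.random() >= p_stay[state]:
        state = "C" if state == "B" else "B"

bits = np.array(bits)
p1 = bits.mean()                              # marginal P(bit = 1)
p1_after_1 = bits[1:][bits[:-1] == 1].mean()  # P(1 | previous bit = 1)
```

The conditional probability of a 1 after a 1 far exceeds the marginal, which is exactly the clustering structure a memoryless model cannot capture and the Markov models exploit.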
The Implementation of Data Compression in the Cassini RPWS Dedicated Compression Processor
Proceedings DCC '95 Data Compression Conference Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515601
I. Willis, L. Woolliscroft, T. Averkamp, D. Gurnett, R. Johnson, D. Kirchner, W. Kurth, W. Robison
{"title":"The Implementation of Data Compression in the Cassini RPWS Dedicated Compression Processor","authors":"I. Willis, L. Woolliscroft, T. Averkamp, D. Gurnett, R. Johnson, D. Kirchner, W. Kurth, W. Robison","doi":"10.1109/DCC.1995.515601","DOIUrl":"https://doi.org/10.1109/DCC.1995.515601","url":null,"abstract":"The Radio and Plasma Wave Science instrument is a part of the scientific payload of the NASA/ESA Cassini mission that is due to be launched to study the planet Saturn in 1997. Such instruments are capable of generating vastly more data than the data systems of the spacecraft and the link to the Earth can handle, and so data selection and data compression are important. Within RPWS some data compression is performed in a dedicated compression processor, the DCP. This processor is based on an HS-80C85 processor and includes several algorithms which have been tested for their efficacy in the compression of plasma wave data. Criteria have been derived for the acceptable data distortion that will not adversely affect the scientific value of the data. The main algorithms installed in the DCP are the Rice algorithm and a Walsh transform. These are complemented with simple bit stripping and packing algorithms. The hardware of the DCP is described. A discussion of the software structure is given together with performance statistics on the software as implemented in the engineering model of RPWS. The software structure in the DCP makes it a suitable host for further scientific software. 
One such is an algorithm to detect dust impacts, and this will also be installed in the engineering model.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131195094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
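Rice coding, one of the two main DCP algorithms, splits each nonnegative integer into a unary-coded quotient and a k-bit binary remainder. A minimal sketch, using a string of '0'/'1' characters for readability rather than packed bits (this illustrates the general Rice code, not the flight implementation):

```python
def rice_encode(values, k):
    """Rice code: unary quotient ('1'*q + '0') followed by k remainder bits."""
    bits = []
    for v in values:
        q, r = divmod(v, 1 << k)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

def rice_decode(bitstream, k, n):
    out, i = [], 0
    for _ in range(n):
        q = 0
        while bitstream[i] == "1":      # count the unary quotient
            q += 1
            i += 1
        i += 1                          # skip the terminating '0'
        r = int(bitstream[i:i + k], 2)  # read the k-bit remainder
        i += k
        out.append((q << k) + r)
    return out

data = [3, 0, 7, 12, 1]
enc = rice_encode(data, k=2)
```

The parameter k trades unary length against remainder length; choosing k near the log of the mean sample magnitude keeps both parts short, which is what makes the code attractive on a small processor like the 80C85.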