Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) — Latest Articles

Joint source-channel coding for progressive transmission of embedded source coders
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755654
V. Chande, N. Farvardin
Abstract: We present a scheme for joint source-channel coding for transmission of sources compressed by embedded source coders over a memoryless noisy channel. We find an exact solution to the problem of optimal channel code allocation. Then we investigate the properties of the solution which allow us to transmit the source progressively while retaining the optimality at intermediate and final transmission rates, using rate-compatible codes.
Citations: 67
Extending DACLIC for near-lossless compression with postprocessing of greyscale images
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785720
Debin Zhao, Y. Chan, Wen Gao
Abstract: Summary form only given. A lossless/near-lossless coding scheme, DACLIC, is presented. The scheme removes redundancy from a given image in the spatial domain via block direction prediction and context-based error modeling. The block direction operation first partitions an image into blocks; pixels within each incoming block are analyzed to find the directional prediction, chosen from a given set, that yields the minimum prediction error. Since block direction prediction alone cannot remove all redundancy in an image, DACLIC adds a second decorrelation stage: context-based error modeling, which exploits context-dependent DPCM error structures. DACLIC is primarily a lossless image compression technique, but it extends easily to near-lossless applications by introducing a small quantization loss, restricted to an absolute error not exceeding a prescribed value n for all pixels. Block direction and context modeling reduce the image to residuals that typically have lower entropy than the original image. A quadtree Rice coder (QRC) is proposed as the entropy coder of DACLIC, with an arithmetic coder as an option. The QRC operates on residual blocks with low computational complexity and compares favorably with the residual coding method used by LOCO-I, as it is two-dimensional in nature. For near-lossless compression with a larger value of n, banding artifacts become visible in the decoded image; a postprocessing technique is proposed in the DACLIC system to remove these artifacts.
Citations: 1
Resynchronizing variable-length codes for robust image transmission
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785686
S. Hemami, Tader Chang, R. Lau
Abstract: Summary form only given. This paper considers instantaneous prefix codes with possibly unequal codeword lengths but specific codewords, such as a Huffman code. Resynchronizing VLCs (RVLCs) contain one or more synchronizing codewords that resynchronize the decoder regardless of any previous data. Previous applications of optimal resynchronizing VLCs have been limited to sources with alphabets of fewer than 30 symbols, while only non-optimal VLCs with ad hoc marker codewords have been applied to image data. This paper adapts a general design algorithm for optimal resynchronizing VLCs to JPEG image data, demonstrating its applicability to sources with large alphabets (>1000). To ensure that the decoded data is placed properly in the image following resynchronization, the resulting VLCs are modified to contain extended synchronizing codewords that serve as markers. Minor modifications to the baseline JPEG coder increase the robustness to errors, and a concealment algorithm locates and repairs errant data. Images coded using RVLC-JPEG combined with the concealment algorithm are robust to bit error rates as high as 2×10⁻⁴ and are extremely tolerant of burst errors. The excellent tolerance to both bit and burst errors at high bit rates demonstrates that images coded with such RVLCs can be transmitted over imperfect channels suffering bit errors or packet losses without channel coding for the image data. At lower bit rates, while the overhead is non-trivial, the encoded bitstream has no firm restrictions on the number or spacing of errors and hence degrades more gracefully than traditional ECC.
Citations: 1
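The resynchronization behavior described in this abstract can be illustrated with a toy prefix code (the code, message, and error position below are hypothetical examples, not the paper's RVLC design): after a bit error, a plain prefix-code decoder emits wrong symbols but may fall back into codeword alignment by chance; RVLC design turns this chance realignment into a guarantee via synchronizing codewords.

```python
# Toy prefix code (hypothetical, not the paper's RVLC): show how a decoder
# loses and then regains codeword alignment after a single bit error.

CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {v: k for k, v in CODE.items()}

def encode(symbols):
    return "".join(CODE[s] for s in symbols)

def decode(bits):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:          # complete codeword recognized
            out.append(DECODE[buf])
            buf = ""
    return out

msg = list("abacdb")
bits = encode(msg)                  # "010011011110"
assert decode(bits) == msg

# Flip bit 2: the decoder misreads the middle of the stream but realigns,
# so the trailing symbols decode correctly again ("accdb" vs "abacdb").
corrupted = bits[:2] + ("0" if bits[2] == "1" else "1") + bits[3:]
assert decode(corrupted) == list("accdb")
```

The toy decoder happens to realign here; the paper's contribution is designing the codebook so that a synchronizing codeword forces realignment regardless of the decoder's state.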
Image compression based on low-pass wavelet transform and multi-scale edge compensation. Part II: evidence and experiments
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785716
Xiaohui Xue
Abstract: Summary form only given. The scalability and recognition ability of MSEC's edge detection can be observed: two kinds of edges are handled separately, and for each kind the detector responds only at the exact scale. For example, the apparent large-scale edges in the original image remain undetected until an appropriate decomposition level is reached. Compensated images are considerably smoothed, since fine details are systematically removed; such smoothing differs fundamentally from traditional smoothing methods in both principle and effect. The low-pass wavelet transform works because the compensated image contains much less high-frequency energy. Through the low-pass wavelet transform, the system is able to reach and process edges at the next larger scale. The encoder therefore outputs multi-scale primal sketches, which are coded in a modeled way instead of pixel by pixel, together with the final smooth background, which can be readily coded by traditional methods. The decoder synthesizes the image from the received multi-scale primal sketch and background information.
Citations: 1
Multiple description lattice vector quantization
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755649
S. Servetto, V. Vaishampayan, N. Sloane
Abstract: We consider the problem of designing a lattice-based multiple description vector quantizer for a two-channel diversity system. The design of such a quantizer can be reduced to the problem of assigning pair labels to points of a vector quantizer codebook. A general labeling procedure based on the structure of the lattice is presented, along with detailed results for the hexagonal lattice: algorithms, asymptotic performance, and numerical simulations. Asymptotically, when compared with the lattice Z, the resulting quantizer achieves the standard second-moment gain of the hexagonal lattice for the central distortion and, surprisingly, achieves the two-dimensional sphere gain for the side distortion.
Citations: 65
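The pair-labeling idea in this abstract can be sketched in one dimension (a scalar analogue, not the paper's hexagonal-lattice construction; the diagonal "spread-1" labeling below is a standard textbook choice): each central quantizer cell gets a pair of side indices lying near the diagonal of the index matrix, so the pair identifies the cell exactly, while either side index alone narrows it to a small window.

```python
# Scalar toy of two-description index assignment (not the paper's lattice
# algorithm): label central cells with pairs (i, j) where |i - j| <= 1.

def make_labels(n):
    """Enumerate n distinct pairs near the matrix diagonal, walked in
    order of increasing magnitude: (0,0),(1,0),(0,1),(1,1),(2,1),..."""
    pairs = []
    k = 0
    while len(pairs) < n:
        for ij in [(k, k), (k + 1, k), (k, k + 1)]:
            if len(pairs) < n:
                pairs.append(ij)
        k += 1
    return pairs

labels = make_labels(7)
assert labels == [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2), (2, 2)]

# Labels are distinct, so both descriptions together recover the central
# index exactly; one side index alone narrows it to a few nearby cells.
assert len(set(labels)) == len(labels)
assert [c for c, (i, j) in enumerate(labels) if i == 1] == [1, 3, 5]
```

Widening the band around the diagonal trades central resolution for side distortion, which is the knob the multiple-description design tunes.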
Generalized multiple description vector quantization
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755648
M. Fleming, M. Effros
Abstract: Packet-based data communication systems suffer from packet loss under high network traffic conditions. As a result, the receiver is often left with an incomplete description of the requested data. Multiple description source coding addresses the problem of minimizing the expected distortion caused by packet loss. An equivalent problem is that of source coding for data transmission over multiple channels where each channel has some probability of breaking down. Recent work in practical multiple description coding explores the design of multiple description scalar and vector quantizers for the case of two channels or packets. This paper presents a new practical algorithm, based on a ternary tree structure, for the design of both fixed- and variable-rate multiple description vector quantizers for an arbitrary number of channels. Experimental results achieved by codes designed with this algorithm show that they perform well under a wide range of packet loss scenarios.
Citations: 87
Encoding time reduction in fractal image compression
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785706
I. Salih, S. H. Smith
Abstract: Summary form only given. The mathematical interpretation of fractal image compression is closely related to Banach's fixed-point theorem. More precisely, if (X,d) is a metric space of digital images, where d is a suitable metric, we want to regard an element of X that we wish to encode as the fixed point of some operator. Since we are coding images, the choice of the space X as well as the metric d has an enormous effect on the complexity of the code. Coding an image f consists of finding an iterated function system (IFS), i.e. a contractive mapping W, whose fixed point is the best approximation of f. The collage theorem states that by minimizing the distance between f and Wf, the distance between the fixed point of W and the image f is expected to be minimized. We present a method of mapping similar regions within an image by an approximation of the collage error; this results in writing range blocks as linear combinations of domain blocks. We also address the complexity of the encoder by proposing a new classification scheme based on the moments of the domain and range blocks, which reduces the encoding time by a factor of hundreds with insubstantial loss in image quality. Extensive simulation results confirm our claims.
Citations: 1
Improved joint source-channel decoding for variable-length encoded data using soft decisions and MMSE estimation
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785701
Moonseo Park, David J. Miller
Abstract: Summary form only given. We develop improved joint source-channel (JSC) methods for decoding variable-length encoded data based on residual source redundancy. Until very recently, all JSC methods based on residual redundancy assumed fixed-length codewords. Recently, a practically realizable system that performed best over a significant range of channel conditions was suggested, consisting of inner binary convolutional (BC) bit-level decoding followed by outer symbol-level approximate maximum a posteriori (MAP) JSC decoding. Here we suggest two ways of improving on this method. First, a straightforward improvement is realized by using soft/probabilistic bit decisions output by the BC decoder rather than hard decisions. Second, the JSC decoder can itself generate soft/probabilistic output at the symbol level. The exact VLC minimum mean-squared error (MMSE) decoder has high complexity, like the exact MAP method, because the number of states increases with time; we therefore suggest an approximate MMSE method. In this approximate scheme, we first form a reduced directed graph, using the same state-reduction procedure as in approximate MAP JSC decoding, and then rearrange the remaining states into an equivalent directed graph. We then apply the forward/backward algorithm and a state-merging procedure to this reduced graph to obtain approximate a posteriori probabilities, which are used for MMSE estimation.
Citations: 8
"Bit rate on demand" using pruned tree-structured hierarchical lookup vector quantization
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755652
Kunal Mukherjee, A. Mukherjee, T. Acharya
Abstract: We propose a real-time image coding system which is capable of adapting instantaneously to the available channel bandwidth. The range of operational bandwidths for this system has a finer calibration than ordinary hierarchical vector quantization (HVQ) or wavelet-based hierarchical vector quantization (WHVQ) methods suggested in the literature. These properties make it very attractive for networks with fluctuating available bandwidth, like the Internet. All encoder and decoder operations are strictly constant time per pixel, proceeding through table lookups, and are intrinsically suitable for parallel and hardware implementation.
Citations: 0
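The constant-time-per-pixel table-lookup encoding this abstract describes can be sketched as follows (the tiny table below is a hypothetical stand-in; real HVQ tables are trained offline and stacked over several stages):

```python
# Toy hierarchical table-lookup "encoder": each stage maps a pair of
# lower-level indices to one higher-level index via a precomputed table,
# so encoding is pure lookups with constant time per input sample.

import itertools

# Hypothetical stage-1 table over 1-bit pixel indices; here it simply
# packs the pair into a 2-bit index (a trained table would quantize).
table1 = {(a, b): (a << 1) | b
          for a, b in itertools.product(range(2), repeat=2)}

def encode(pixels):
    """Pair up pixels and look each pair up; deeper HVQ stages would
    repeat this on the resulting indices."""
    return [table1[(pixels[i], pixels[i + 1])]
            for i in range(0, len(pixels), 2)]

assert encode([0, 1, 1, 1]) == [1, 3]
```

Pruning the tree of stages, as the paper proposes, lets the encoder stop at whichever level matches the bandwidth currently available.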
Image compression based on low-pass wavelet transform and multi-scale edge compensation. Part II: MSEC model
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096) Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785715
Xiaohui Xue
Abstract: Summary form only given. This paper presents the idea of multi-scale edge compensation and puts forward an image compression method (MSEC) based on the low-pass wavelet transform and multi-scale edge compensation. The encoder performs edge detection and edge compensation at every scale from fine to coarse, and outputs the model information and the final background. The decoder synthesizes the image from the recorded multi-scale edge model and the background. Experimental results are considerably encouraging: for the 512×512×24-bit Lena image compressed by a factor of 159, the PSNR values for the Y, U and V components are 28.2 dB, 34.6 dB and 34.5 dB respectively. For a large class of images, compression as high as about 500:1 is achieved while the image quality remains acceptable. In fact, the performance of the current MSEC system can be greatly improved in the future, since the MSEC technique involves many aspects of image processing, including both image analysis and realistic image generation. The theory of the MSEC model consists of two components. One is the model and processing methods for MSEC edges: scalability and recognition ability of the edge detection are essential to MSEC, which recognizes and processes two different kinds of edges, roof edges and step edges; the scalability of an edge is associated with the algebraic precision of the low-pass wavelet, and the compensation models of edge profile and edge shape are also new concepts of MSEC. The other is the low-pass wavelet transform of MSEC, which studies the properties of the low-pass wavelet transform in detail and explains why it is used as the multi-scale transform tool; the concept of algebraic precision of the low-pass wavelet transform is crucial, and its frequency response is also inspected.
Citations: 2