2013 Data Compression Conference: Latest Publications

Context Lossless Coding of Audio Signals
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.102
G. Ulacha, R. Stasinski
Abstract: In this paper, improvements obtained from context-based lossless audio coding are investigated. The approach is not popular in audio compression; hence, the research concentrates on static forward predictors optimized using the MMSE criterion. Two- and three-context algorithms are tested on 16 popular benchmark recordings. Savings due to inter-channel audio dependencies are also considered. It is shown that the context approach indeed has the potential to improve the data compaction properties of audio coding algorithms.
Citations: 0
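The static forward predictors mentioned above are fitted by minimizing the mean squared prediction error over the samples they are trained on. The following Python sketch, which is not taken from the paper, shows one common way to do this: solve a least-squares problem for the predictor coefficients and then form the integer residual that a lossless entropy coder would encode. The predictor order of 8, the use of NumPy's least-squares solver, and the synthetic test signal are illustrative assumptions.

```python
import numpy as np

def mmse_predictor(x, order=8):
    """Fit a forward linear predictor by least squares (MMSE criterion).

    x     : 1-D array of audio samples (one channel)
    order : number of past samples used for prediction (illustrative choice)
    """
    # Row t of X holds the `order` samples preceding sample t, so X @ a ~ x[order:].
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    target = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coeffs

def prediction_residual(x, coeffs):
    """Integer residual e[t] = x[t] - round(prediction), as entropy-coded in lossless audio coding."""
    order = len(coeffs)
    e = np.array(x[:order], dtype=np.int64).tolist()   # first samples sent verbatim
    for t in range(order, len(x)):
        past = x[t - order:t][::-1]                    # most recent sample first
        pred = int(round(float(np.dot(coeffs, past))))
        e.append(int(x[t]) - pred)
    return np.array(e, dtype=np.int64)

# Example: a strongly correlated synthetic signal yields small residuals.
rng = np.random.default_rng(0)
x = np.cumsum(rng.integers(-3, 4, size=2000)).astype(np.int64)
a = mmse_predictor(x.astype(float))
res = prediction_residual(x, a)
print("signal std %.1f -> residual std %.1f" % (x.std(), res.std()))
```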
Subsampling Input Based Side Information Creation in Wyner-Ziv Video Coding
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.98
Y. Shen, Ji-Ciao Luo, Ja-Ling Wu
Abstract: Summary form only given. Distributed video coding (DVC) has been intensively studied in recent years. This new coding paradigm substantially differs from conventional prediction-based video codecs such as MPEG and H.26x, which are characterized by a complex encoder and a simple decoder. The conventional DVC codec, e.g., the DISCOVER codec, uses advanced frame interpolation techniques to create side information (SI) based on adjacent decoded reference frames. The quality of SI is a well-recognized factor in the rate-distortion (RD) performance of Wyner-Ziv (WZ) video coding. High SI quality implies a high correlation between the created SI and the original WZ frame, which in turn decreases the rate required to achieve a given decoded quality. Clearly, the performance of an SI creation process based on adjacent previously decoded frames is limited by the quality of the past and future reference frames as well as the distance and motion behavior between them. The correlation between high-motion frames is low, and vice versa. That is, SI quality in the conventional codecs depends on the temporal correlation of key frames, which affects the bitrate and PSNR of the compression process. In this work, a novel DVC architecture is proposed to improve RD performance for high-motion and large GOP-size sequences. For high-motion video sequences, the proposed architecture generates SI using subsampled spatial information instead of interpolated temporal information. The proposed approach separates the video sequence into subsampled key frames and corresponding WZ frames, which changes how SI is created. That is, all successive frames on the encoder side are downsized to sub-frames, which are then compressed by an H.264/AVC intra encoder. Experimental results reveal that the subsampling-input-based DVC codec can gain up to 1.47 dB in the RD measures compared with the conventional WZ codec, while maintaining the most important characteristic of a DVC codec: a lightweight encoder. The novel DVC architecture evaluated in this study exploits spatial relations to create SI. The experimental results confirm that the RD performance of the proposed approach is superior to that of the conventional one for high-motion and/or large GOP-size sequences. The quality of spatial-interpolation-based SI is higher than that of temporal interpolation, which leads to a high-PSNR reconstructed WZ frame. The subsampled key frames are also decoded by an LDPCA decoder to recover the information lost when H.264/AVC intra coding is used, increasing the PSNR gain. Since many spatial-domain interpolation and super-resolution schemes have been proposed in image processing and computer vision, the performance of the proposed DVC codec can be further enhanced by using better schemes to generate even better SI.
Citations: 1
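The central idea in the entry above is to build side information from a spatially subsampled version of the current frame rather than from temporally interpolated neighbours. The sketch below illustrates only that idea under simplifying assumptions: factor-2 decimation and nearest-neighbour upsampling stand in for the H.264/AVC intra coding path and the better spatial interpolators mentioned in the abstract.

```python
import numpy as np

def subsample(frame, factor=2):
    """Keep every `factor`-th pixel in both directions (the sub-frame sent to the intra coder)."""
    return frame[::factor, ::factor]

def spatial_side_information(sub_frame, factor=2):
    """Create side information by spatially interpolating the decoded sub-frame.

    Nearest-neighbour upsampling is used purely for illustration; the paper notes
    that better spatial interpolation / super-resolution schemes improve SI quality.
    """
    return np.repeat(np.repeat(sub_frame, factor, axis=0), factor, axis=1)

# Toy example with a synthetic 8-bit frame.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
si = spatial_side_information(subsample(frame))
mse = np.mean((frame.astype(float) - si[:64, :64].astype(float)) ** 2)
print("SI PSNR: %.1f dB" % (10 * np.log10(255.0 ** 2 / mse)))
```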
Online Learning Based Face Distortion Recovery for Conversational Video Coding
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.105
Xi Wang, Li Su, Qingming Huang, Guorong Li, H. Qi
Abstract: In a video conversation, the participants usually remain the same. As the conversation continues, similar facial expressions of the same person occur intermittently. However, the correlation among similar face features has not been fully exploited, since conventional methods only operate on independent frames. We set up a face feature database and update it online to include new facial expressions over the course of the conversation. At the receiver side, the database is used to recover face distortion and thus improve visual quality. Additionally, the proposed method adds only a small overhead for updating the database and is generic to various codecs.
Citations: 2
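The abstract describes keeping a database of face features that is updated online and used at the receiver to repair distorted face regions with similar, previously seen content. The sketch below is a heavily simplified illustration of that pattern; the fixed-size raw-pixel patches, the SSD matching criterion, and the FIFO update policy are assumptions made here, not the authors' actual feature representation or update rule.

```python
import numpy as np

class FacePatchDatabase:
    """Toy online database of face patches (illustrative stand-in for the paper's features)."""

    def __init__(self, max_size=256):
        self.patches = []
        self.max_size = max_size

    def update(self, patch):
        """Add a newly decoded, good-quality patch; drop the oldest when full."""
        self.patches.append(patch.astype(np.float32))
        if len(self.patches) > self.max_size:
            self.patches.pop(0)

    def recover(self, distorted):
        """Return the stored patch closest to the distorted input (SSD criterion)."""
        if not self.patches:
            return distorted
        d = distorted.astype(np.float32)
        errors = [np.sum((p - d) ** 2) for p in self.patches]
        return self.patches[int(np.argmin(errors))].astype(distorted.dtype)

# Usage: feed the database clean patches, then query it with a distorted one.
rng = np.random.default_rng(2)
db = FacePatchDatabase()
clean = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
db.update(clean)
noisy = np.clip(clean.astype(int) + rng.integers(-20, 21, size=(16, 16)), 0, 255).astype(np.uint8)
recovered = db.recover(noisy)
print("mean abs error before: %.1f, after: %.1f"
      % (np.mean(abs(noisy.astype(int) - clean.astype(int))),
         np.mean(abs(recovered.astype(int) - clean.astype(int)))))
```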
Low Complexity Rate Distortion Optimization for HEVC
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.15
Siwei Ma, Shiqi Wang, Shanshe Wang, Liang Zhao, Qin Yu, Wen Gao
Abstract: The emerging High Efficiency Video Coding (HEVC) standard has improved coding efficiency drastically and can provide equivalent subjective quality with more than 50% bit rate reduction compared to its predecessor, H.264/AVC. As expected, the improvement in coding efficiency comes at the expense of significantly higher computational complexity. In this paper, based on an overall analysis of the computational complexity of the HEVC encoder, a low-complexity rate distortion optimization (RDO) coding scheme is proposed that reduces the number of candidates evaluated in intra prediction mode decision, reference frame selection, and CU splitting. With the proposed scheme, the RDO stage of HEVC can be implemented in a low-complexity way for complexity-constrained encoders. Experimental results demonstrate that, compared with the original HEVC reference encoder implementation, the proposed algorithms achieve about 30% reduced encoding time on average with negligible coding performance degradation (0.8%).
Citations: 34
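The complexity reduction comes from shrinking the candidate sets the rate-distortion optimization must evaluate (intra modes, reference frames, CU splits). The sketch below shows the generic shape of such a pruned RDO loop: rank candidates with a cheap cost, then compute the full Lagrangian cost J = D + lambda*R only for the survivors. The rough-cost pre-selection and the "keep 3" threshold are illustrative assumptions, not the paper's exact decision rules.

```python
def rd_cost(distortion, rate, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def choose_mode(candidates, evaluate_full, evaluate_rough, lam, keep=3):
    """Pick the best coding mode while fully evaluating only a pruned candidate subset.

    candidates     : list of mode identifiers (e.g. intra prediction modes)
    evaluate_rough : cheap cost estimate (e.g. SATD-based), used for pre-selection
    evaluate_full  : expensive full RD evaluation returning (distortion, rate)
    keep           : how many candidates survive pruning (illustrative value)
    """
    ranked = sorted(candidates, key=evaluate_rough)        # cheap pass over all candidates
    best_mode, best_cost = None, float("inf")
    for mode in ranked[:keep]:                             # expensive pass on survivors only
        d, r = evaluate_full(mode)
        j = rd_cost(d, r, lam)
        if j < best_cost:
            best_mode, best_cost = mode, j
    return best_mode, best_cost

# Toy usage with synthetic cost functions.
modes = list(range(35))                                    # HEVC defines 35 intra modes
rough = lambda m: abs(m - 17) + 0.5                        # pretend mode 17 looks promising
full = lambda m: (abs(m - 16) * 10.0, 8 + m % 4)           # true optimum is near mode 16
print(choose_mode(modes, full, rough, lam=0.85))
```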
Cross Segment Decoding for Improved Quality of Experience for Video Applications
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.31
Jiangtao Wen, Shunyao Li, Yao Lu, Meiyuan Fang, Xuan Dong, Huiwen Chang, Pin Tao
Abstract: In this paper, we present an improved algorithm for decoding live-streamed or pre-encoded video bit streams with time-varying qualities. The algorithm extracts information available to the decoder from a high-visual-quality segment of the clip that has already been received and decoded but was encoded independently of the current segment. The proposed decoder can significantly improve the user's Quality of Experience without incurring significant storage or computational overhead at either the encoder or the decoder. We present simulation results using the HEVC reference encoder and standard test clips, and discuss areas of improvement for the algorithm and potential ways of incorporating the technique into a video streaming system or standards.
Citations: 1
Single-Pass Dependent Bit Allocation in Temporal Scalability Video Coding
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.87
Jiaying Liu, Yongjin Cho, Zongming Guo
Abstract: Summary form only given. In scalable video coding, we consider a group-of-pictures (GOP) structure composed of hierarchically aligned B-pictures. It employs generalized B-pictures that can be used as references for subsequent inter-coded frames. Although it introduces a structural encoding delay of one GOP size, it provides much higher coding efficiency than conventional GOP structures [2]. Moreover, owing to its natural ability to provide temporal scalability, it is employed as a GOP structure in H.264/SVC [3]. Because of the complex inter-layer dependence of hierarchical B-pictures, developing an efficient and effective bit allocation algorithm for H.264/SVC is a challenging task. Several bit allocation algorithms in the literature have considered this inter-layer dependence: Schwarz et al. proposed a QP cascading scheme that applies a fixed quantization parameter (QP) difference between adjacent temporal layers, and Liu et al. introduced constant weights for temporal layers in their H.264/SVC rate control algorithm. Although these algorithms achieve superior coding efficiency, they are limited in two aspects: the inter-layer dependence is addressed heuristically, and the input video characteristics are not taken into account. For these reasons, the optimality of these bit allocation algorithms cannot be guaranteed. In this work, we propose a single-pass dependent bit allocation algorithm for scalable video coding with hierarchical B-pictures. It is generally perceived that dependent bit allocation algorithms cannot be employed in practice because of their extremely high complexity. To develop a practical single-pass algorithm, we use the number of skipped blocks and the ratio of the mean absolute difference (MAD) as features to measure the inter-layer signal dependence of the input video. The proposed algorithm performs bit allocation at the target bit rate with two mechanisms: 1) GOP-based rate control and 2) adaptive temporal-layer QP decision. The superior performance of the proposed algorithm is demonstrated by experimental results, benchmarked against two other single-pass bit allocation algorithms from the literature. Rate and PSNR coding performance are reported for the proposed scheme and the two benchmarks at various target bit rates for GOP-4 and GOP-8. The proposed rate control algorithm achieves about 0.2-0.3 dB improvement in coding efficiency compared to JSVM, and it outperforms Liu's algorithm by a significant margin.
Citations: 1
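The algorithm splits the GOP bit budget across temporal layers using measured features (skipped-block counts and MAD ratios) rather than fixed QP offsets. The sketch below only illustrates the overall shape of such a weighted GOP-level allocation; the particular weighting rule and budget formula are assumptions for illustration, not the model derived in the paper.

```python
def gop_bit_budget(target_bps, frame_rate, gop_size, buffer_correction=0.0):
    """Bits available for the next GOP at the target rate (plus an optional buffer term)."""
    return target_bps * gop_size / frame_rate + buffer_correction

def allocate_to_layers(gop_bits, mad_per_layer, skipped_ratio_per_layer):
    """Split a GOP budget across temporal layers.

    Layers whose frames have higher MAD (harder to predict) and fewer skipped
    blocks receive proportionally more bits; this simple weighting is an
    illustrative choice, not the paper's exact dependence model.
    """
    weights = [mad * (1.0 - skip) for mad, skip in zip(mad_per_layer, skipped_ratio_per_layer)]
    total = sum(weights) or 1.0
    return [gop_bits * w / total for w in weights]

# Toy usage: a 3-layer hierarchy (GOP-4) at 1 Mbps, 30 fps.
budget = gop_bit_budget(1_000_000, 30.0, 4)
bits = allocate_to_layers(budget,
                          mad_per_layer=[9.0, 5.0, 3.0],
                          skipped_ratio_per_layer=[0.05, 0.25, 0.50])
print([round(b) for b in bits])
```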
Coding Tree Depth Estimation for Complexity Reduction of HEVC
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.12
G. Corrêa, P. Assunção, L. Agostini, L. Cruz
Abstract: The emerging HEVC standard introduces a number of tools that increase compression efficiency in comparison to its predecessors, at the cost of greater computational complexity. This paper proposes a complexity control method for HEVC encoders based on dynamic adjustment of the newly proposed coding tree structures. The method improves a previous solution by adopting a strategy that takes both spatial and temporal correlation into consideration when deciding the maximum coding tree depth allowed for each coding tree block. Complexity control capability is increased in comparison to the previous work, while compression losses are decreased by 70%. Experimental results show that the encoder's computational complexity can be downscaled to 60% with an average bit rate increase of around 1.3% and a PSNR decrease under 0.07 dB.
Citations: 49
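The method caps the quadtree depth evaluated for each coding tree block using spatially and temporally neighbouring blocks. The sketch below illustrates that kind of decision rule; taking the maximum neighbour depth plus a margin, and falling back to exhaustive search when no context exists, are assumptions chosen for illustration rather than the exact estimator proposed in the paper.

```python
def max_depth_for_ctb(left_depth, upper_depth, colocated_depth,
                      margin=0, absolute_max=3):
    """Estimate the maximum quadtree depth worth evaluating for the current CTB.

    left_depth, upper_depth : depths chosen by spatial neighbours in this frame
    colocated_depth         : depth chosen by the co-located CTB in the previous frame
    margin                  : extra depth allowed beyond the neighbours (illustrative)
    absolute_max            : deepest split for a 64x64 CTB (depth 3 -> 8x8 CUs)
    """
    neighbours = [d for d in (left_depth, upper_depth, colocated_depth) if d is not None]
    if not neighbours:                 # no context available (e.g. first CTB after a refresh)
        return absolute_max            # fall back to exhaustive evaluation
    return min(max(neighbours) + margin, absolute_max)

# Toy usage: neighbours all stopped at shallow depths, so depths 2 and 3 are never evaluated.
limit = max_depth_for_ctb(left_depth=1, upper_depth=1, colocated_depth=0)
print("evaluate depths 0..%d" % limit)
```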
Texture Compression
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.30
Georgios Georgiadis, A. Chiuso, Stefano Soatto
Abstract: We characterize "visual textures" as realizations of a stationary, ergodic, Markovian process, and propose using its approximate minimal sufficient statistics for compressing texture images. We propose inference algorithms for estimating the "state" of such a process and its "variability"; these represent the encoding stage. We also propose a non-parametric sampling scheme for decoding, by synthesizing textures from their encoding. While these are not faithful reproductions of the original textures (so they would fail a comparison test based on PSNR), they capture the statistical properties of the underlying process, as we demonstrate empirically. We also quantify the tradeoff between fidelity (measured by a proxy of a perceptual score) and complexity.
Citations: 10
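Decoding in this scheme means synthesizing a texture that matches the encoded statistics rather than reproducing the original pixels, which is why PSNR is not a meaningful fidelity measure here. The sketch below shows a tiny non-parametric sampler (random patch tiling from an exemplar) purely to make "sampling from the encoding" concrete; using a raw exemplar as the statistic and the 8x8 patch size are assumptions, not the paper's minimal sufficient statistics or its sampling scheme.

```python
import numpy as np

def synthesize(exemplar, out_shape, patch=8, rng=None):
    """Non-parametric texture synthesis by sampling patches from an exemplar.

    Random patch tiling is used for brevity; overlap blending or per-pixel
    neighbourhood matching would give visually better results.
    """
    rng = rng or np.random.default_rng(0)
    h, w = exemplar.shape
    out = np.zeros(out_shape, dtype=exemplar.dtype)
    for y in range(0, out_shape[0], patch):
        for x in range(0, out_shape[1], patch):
            sy = rng.integers(0, h - patch + 1)
            sx = rng.integers(0, w - patch + 1)
            block = exemplar[sy:sy + patch, sx:sx + patch]
            out[y:y + patch, x:x + patch] = block[:out_shape[0] - y, :out_shape[1] - x]
    return out

# Toy usage: the "encoding" kept only a 32x32 exemplar; the decoder resynthesizes 128x128.
rng = np.random.default_rng(3)
exemplar = (rng.random((32, 32)) > 0.5).astype(np.uint8) * 255
texture = synthesize(exemplar, (128, 128), patch=8, rng=rng)
print(texture.shape, texture.dtype)
```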
Image Compression via Colorization Using Semi-Regular Color Samples
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.112
Chang-jiang Zhang, Hui Fang
Abstract: Summary form only given. We improve colorization-based image compression by sparsely sampling color points on a semi-regular grid and compressing them using JPEG. We generate variations of the sampling locations based on extreme gray-scale values to further improve PSNR.
Citations: 1
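The method keeps full-resolution luminance but transmits chroma only at sparse, semi-regular grid positions and re-colorizes at the decoder. The sketch below illustrates the sampling step and a deliberately naive reconstruction (each grid sample is spread over its cell); a real colorization algorithm would propagate color along luminance edges, and the grid spacing of 8 is an illustrative assumption rather than the paper's setting.

```python
import numpy as np

def sample_chroma_on_grid(chroma, step=8):
    """Keep chroma values only at a regular grid (the sparse samples that would be JPEG-coded)."""
    return chroma[::step, ::step]

def naive_colorize(luma_shape, chroma_samples, step=8):
    """Spread each grid sample over its cell (a stand-in for true colorization,
    which would propagate colors guided by the luminance image)."""
    up = np.repeat(np.repeat(chroma_samples, step, axis=0), step, axis=1)
    return up[:luma_shape[0], :luma_shape[1]]

# Toy usage on a synthetic 64x64 chroma plane.
rng = np.random.default_rng(4)
chroma = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
samples = sample_chroma_on_grid(chroma)          # 8x8 samples instead of 64x64 values
rec = naive_colorize((64, 64), samples)
print("kept %.1f%% of chroma values" % (100.0 * samples.size / chroma.size))
```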
From Run Length Encoding to LZ78 and Back Again
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.22
Yuya Tamakoshi, T. I., Shunsuke Inenaga, H. Bannai, M. Takeda
Abstract: In this paper, we present efficient algorithms for interconversion between Lempel-Ziv 78 (LZ78) encoding and run length encoding (RLE). We show how, given an RLE of size n for a string S, we can compute the corresponding LZ78 encoding of size m for S in O((n + m) log σ) time, where σ is the number of distinct characters appearing in S. We also show how, given an LZ78 encoding of size m for a string S, we can compute the corresponding RLE of size n in O(n + m) time. Both algorithms use O(m) extra working space.
Citations: 8
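For reference, the two encodings being interconverted are easy to state directly. The Python sketch below gives plain (non-conversion) implementations of RLE and LZ78 so that the objects of size n and m in the abstract are concrete; the paper's contribution is converting between the two encodings without expanding the string, in O((n + m) log σ) and O(n + m) time, which this sketch does not attempt.

```python
def rle_encode(s):
    """Run length encoding: list of (character, run length) pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def lz78_encode(s):
    """LZ78 encoding: list of (dictionary index, next character) pairs; index 0 is the empty phrase."""
    dictionary = {"": 0}
    out, phrase = [], ""
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch                      # extend the current phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                                # flush the final, already-seen phrase
        out.append((dictionary[phrase], ""))
    return out

s = "aaaabbbaabbb"
print(rle_encode(s))   # [('a', 4), ('b', 3), ('a', 2), ('b', 3)]
print(lz78_encode(s))
```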