28th Picture Coding Symposium: Latest Publications

Temporal signal energy correction and low-complexity encoder feedback for lossy scalable video coding
Marijn J. H. Loomans, Cornelis J. Koeleman, P. D. With
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702518
Abstract: In this paper, we address two problems found in embedded implementations of Scalable Video Codecs (SVCs): the temporal signal energy distribution and frame-to-frame quality fluctuations. The unequal energy distribution between the low- and high-pass bands with integer-based wavelets leads to sub-optimal rate-distortion choices coupled with quantization-error accumulation. The second problem is the quality fluctuation between frames within a Group Of Pictures (GOP). To solve these two problems, we present two modifications to the SVC. The first is a temporal energy correction of the lifting scheme in the temporal wavelet decomposition. By moving this energy correction to the leaves of the temporal tree, we save on required memory size, bandwidth, and computations, while reducing floating/fixed-point conversion errors. The second modification feeds back the decoded first frame of the GOP (the temporal low-pass) into the temporal coding chain. The decoding of the first frame is achieved without entropy decoding and without requiring any modifications at the decoder. Experiments show that quality fluctuations within the GOP are significantly reduced, thereby significantly increasing subjective visual quality. On top of this, a small quality improvement is achieved on average.
Citations: 0

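The energy-correction idea in the abstract above can be illustrated with a toy integer Haar lifting along the time axis: instead of applying the sqrt(2) band normalization at every decomposition level, the pending correction is accumulated and applied only once, at the leaves of the temporal tree. This is a hedged sketch, not the authors' implementation; the one-sample-per-frame simplification, the Haar kernel, and the `decompose`/`corrected_energy` helpers are illustrative assumptions.

```python
import math

def haar_lift(x):
    # One level of integer Haar lifting along the temporal axis.
    # Each list element stands in for a whole frame (per-pixel ops elided).
    high = [b - a for a, b in zip(x[0::2], x[1::2])]
    low = [a + (h >> 1) for a, h in zip(x[0::2], high)]
    return low, high

def decompose(frames, levels):
    # Keep the whole temporal tree in integers; record the sqrt(2)
    # normalization each band *would* have received per level and apply
    # it only at the leaves, avoiding per-level fixed/float conversions
    # (and the rounding errors they accumulate).
    bands, gain, low = [], 1.0, frames
    for _ in range(levels):
        low, high = haar_lift(low)
        gain *= math.sqrt(2)            # pending gain of the low band
        bands.append((high, gain / 2))  # high-band gain: low gain / sqrt(2)
    bands.append((low, gain))
    return bands                        # list of (coefficients, correction)

def corrected_energy(bands):
    return sum((c * v) ** 2 for band, c in bands for v in band)

frames = [10, 12, 11, 13, 9, 8, 14, 15]
bands = decompose(frames, 3)
signal_energy = sum(v * v for v in frames)  # 1100
# Parseval holds only approximately here: the integer lifting rounds (h >> 1).
print(signal_energy, corrected_energy(bands))
```

With exact arithmetic the corrected bands would reproduce the signal energy exactly; the residual gap in this toy run comes entirely from the integer rounding inside the lifting steps.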
Fast rate-distortion optimized transform for Intra coding
Xin Zhao, Li Zhang, Siwei Ma, Wen Gao
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702552
Abstract: In our previous work, the rate-distortion optimized transform (RDOT) was introduced for Intra coding, characterized by the use of multiple offline-trained transform matrix candidates. RDOT achieves remarkable coding gain for KTA Intra coding while maintaining almost the same computational complexity at the decoder. At the encoder, however, the computational complexity is increased drastically by the expensive rate-distortion (R-D) optimized selection of the transform matrix. To resolve this problem, we propose a fast RDOT scheme using macroblock- and block-level R-D cost thresholding. With the proposed method, unnecessary mode trials and R-D evaluations of transform matrices are efficiently skipped during mode decision. Extensive experimental results show that, with negligible coding performance degradation, about 88.9% of the total encoding time is saved.
Citations: 0

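The abstract does not spell out the thresholding rule, but the general shape of R-D cost thresholding can be sketched: a cheap cost estimate gates whether a candidate transform receives a full Lagrangian rate-distortion evaluation. The names, the `alpha * best_so_far` threshold rule, and the precomputed cost pairs below are assumptions of this sketch, not the paper's method.

```python
def rd_cost(distortion, rate, lmbda):
    # Standard Lagrangian rate-distortion cost J = D + lambda * R.
    return distortion + lmbda * rate

def choose_transform(candidates, lmbda, alpha=1.1):
    """Pick the best transform candidate, skipping the full R-D
    evaluation of any candidate whose cheap estimate already exceeds
    alpha * best_cost_so_far.  `candidates` is a list of
    (cheap_estimate, (distortion, rate)) pairs; the second element is
    only inspected when the candidate is not skipped."""
    best_idx, best_cost, evaluated = None, float("inf"), 0
    for i, (estimate, full) in enumerate(candidates):
        if estimate > alpha * best_cost:
            continue                    # threshold skip: no full R-D eval
        evaluated += 1
        d, r = full
        j = rd_cost(d, r, lmbda)
        if j < best_cost:
            best_idx, best_cost = i, j
    return best_idx, best_cost, evaluated

cands = [(100, (90, 20)), (300, (250, 80)), (90, (70, 30)), (200, (150, 40))]
idx, cost, evaluated = choose_transform(cands, 0.5)
print(idx, cost, evaluated)  # only 2 of the 4 candidates are fully evaluated
```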
Suppressing texture-depth misalignment for boundary noise removal in view synthesis
Yin Zhao, Zhenzhong Chen, Dong Tian, Ce Zhu, Lu Yu
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702494
Abstract: During view synthesis based on depth maps, also known as Depth-Image-Based Rendering (DIBR), annoying artifacts are often generated around foreground objects, with the visual effect that slim silhouettes of foreground objects are scattered into the background. These artifacts are referred to as boundary noises. We investigate their cause and find that they result from misalignment between texture and depth information along object boundaries. Accordingly, we propose a novel solution that removes such boundary noises by restricting, during forward warping, the pixels within texture-depth misalignment regions. Experiments show that the algorithm effectively eliminates most boundary noises and is robust for view synthesis with compressed depth and texture information.
Citations: 8

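A minimal sketch of the restriction idea, under stated assumptions: in a toy 1-D forward warp, a pixel sitting on a depth edge that has no matching texture edge (a texture-depth misalignment) is withheld from warping, leaving a hole for later filling instead of scattering foreground color into the background. The edge tests, thresholds, and linear depth-to-disparity model here are invented for illustration and are not the paper's formulation.

```python
def forward_warp_1d(texture, depth, shift_scale=0.1, edge_thresh=30):
    """Toy 1-D forward warp (DIBR).  Pixels whose depth changes sharply
    while the texture does not are treated as texture-depth misaligned
    and are not warped; holes stay None for a later filling stage."""
    n = len(texture)
    out = [None] * n
    for i in range(n):
        depth_edge = i > 0 and abs(depth[i] - depth[i - 1]) > edge_thresh
        tex_edge = i > 0 and abs(texture[i] - texture[i - 1]) > edge_thresh
        if depth_edge and not tex_edge:
            continue  # misaligned boundary pixel: restrict its warping
        j = i + int(depth[i] * shift_scale)  # disparity from depth
        if 0 <= j < n:
            out[j] = texture[i]
    return out

# Smooth texture with a sharp depth step: the boundary pixel is withheld.
texture = [50, 50, 50, 50, 50, 50, 51, 51, 51, 51, 51, 51]
depth = [0, 0, 0, 0, 0, 0, 100, 100, 100, 100, 100, 100]
warped = forward_warp_1d(texture, depth)
print(warped)
```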
Colorization-based coding by focusing on characteristics of colorization bases
Shunsuke Ono, T. Miyata, Y. Sakai
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702473
Abstract: Colorization is a method that adds color components to a grayscale image using only a few representative pixels provided by the user. A novel approach to image compression called colorization-based coding has recently been proposed: it automatically extracts representative pixels from an original color image at the encoder and restores a full color image by colorization at the decoder. However, previous studies on colorization-based coding extract redundant representative pixels and fail to extract the pixels required for suppressing coding error. This paper focuses on the colorization basis that restricts the decoded color components. From this viewpoint, we propose a new colorization-based coding method. Experimental results reveal that our method can drastically reduce the amount of information (the number of representative pixels) compared with conventional colorization-based coding while maintaining objective quality.
Citations: 34

Entropy coding in video compression using probability interval partitioning
D. Marpe, H. Schwarz, T. Wiegand
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702580
Abstract: We present a novel approach to entropy coding which provides the coding efficiency and simple probability modeling capability of arithmetic coding at the complexity level of Huffman coding. The key element of the proposed approach is a partitioning of the unit interval into a small set of probability intervals. An input sequence of discrete source symbols is mapped to a sequence of binary symbols, and each binary symbol is assigned to one of the probability intervals. The binary symbols assigned to a particular probability interval are coded at a fixed probability using a simple code that maps a variable number of binary symbols to variable-length codewords. Probability modeling is thus decoupled from the actual binary entropy coding. The coding efficiency of probability interval partitioning entropy (PIPE) coding is comparable to that of arithmetic coding.
Citations: 25

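The interval-partitioning idea can be made concrete with a small numeric experiment: quantize the probability range of the less-probable bin to K representative probabilities and measure the worst-case rate overhead of coding every bin at its interval's representative instead of its true probability. The geometric placement of interval edges and midpoints below is an assumption of this sketch (the paper derives its own partition); it only illustrates why a small K already comes close to the binary entropy.

```python
import math

def rate(p, q):
    # Expected bits per binary symbol when a bin with less-probable-bin
    # probability p is coded by an entropy code designed for probability q.
    return -(p * math.log2(q) + (1 - p) * math.log2(1 - q))

def entropy(p):
    return rate(p, p)

# Partition the probability range [p_min, 0.5] into K geometric intervals.
K, p_min = 8, 0.01
r = (0.5 / p_min) ** (1.0 / K)                      # ratio between edges
reps = [p_min * r ** (k + 0.5) for k in range(K)]   # geometric midpoints

def representative(p):
    k = min(K - 1, int(math.log(p / p_min) / math.log(r)))
    return reps[k]

# Worst-case relative rate overhead over a log-spaced probability grid.
worst = max(
    rate(p, representative(p)) / entropy(p) - 1.0
    for p in (p_min * r ** (K * i / 400.0) for i in range(401))
)
print(f"K={K} intervals -> worst-case overhead {worst:.2%}")
```

Even this naive partition keeps the worst-case overhead to a few percent of the binary entropy, which is the intuition behind coding each interval with one fixed, pre-designed variable-to-variable-length code.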
Parallel processing method for realtime FTV
Kazuma Suzuki, Norishige Fukushima, T. Yendo, M. P. Tehrani, T. Fujii, M. Tanimoto
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702500
Abstract: In this paper, we propose a parallel processing method to generate free viewpoint images in realtime. Although expressing a free viewpoint image ideally requires capturing the scene from innumerable cameras, arranging cameras at such high density is unrealistic; images at arbitrary viewpoints must therefore be interpolated from a limited set of captured images. However, this interpolation involves a trade-off between image quality and computing time. The proposed method aims to generate high-quality free viewpoint images in realtime by applying parallel processing to the time-consuming interpolation stage.
Citations: 4

Stereoscopic depth estimation using fuzzy segment matching
K. Wegner, O. Stankiewicz, M. Domański
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702443
Abstract: Stereo matching techniques usually match segments or blocks of pixels. This paper proposes to match segments defined as fuzzy sets of pixels. The proposed matching method is applicable to various stereo matching techniques as well as to different measures of differences between pixels. The paper describes the embedding of this approach into state-of-the-art depth estimation software. Experimental results show that the proposed way of stereo matching increases the reliability of various depth estimation techniques.
Citations: 3

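One way to read "segments defined as fuzzy sets of pixels" is a matching cost in which each pixel's contribution is weighted by its degree of membership in the segment anchored at the center pixel, rather than by a hard segment mask. The Gaussian color-similarity membership and the membership-weighted SAD below are assumptions of this sketch, not the paper's exact formulation.

```python
import math

def membership(center_val, val, sigma=10.0):
    # Fuzzy degree to which a pixel belongs to the segment anchored at
    # the center pixel: high when intensities are similar (the Gaussian
    # kernel is an assumption of this sketch).
    return math.exp(-((center_val - val) ** 2) / (2 * sigma ** 2))

def fuzzy_match_cost(left, right, x, d, radius=2):
    # Membership-weighted SAD between a window around x in the left row
    # and the window around x - d in the right row (1-D scanlines).
    num = den = 0.0
    for k in range(-radius, radius + 1):
        if not (0 <= x + k < len(left) and 0 <= x - d + k < len(right)):
            continue
        w = membership(left[x], left[x + k])
        num += w * abs(left[x + k] - right[x - d + k])
        den += w
    return num / den if den else float("inf")

def best_disparity(left, right, x, d_max):
    return min(range(d_max + 1),
               key=lambda d: fuzzy_match_cost(left, right, x, d))

# A bright object shifted by a true disparity of 2 between the views.
left = [10, 10, 10, 80, 80, 80, 10, 10]
right = left[2:] + [10, 10]
print(best_disparity(left, right, 4, 4))
```

Because background pixels get near-zero membership in the foreground segment, they barely perturb the cost at object boundaries, which is where hard windows typically fail.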
Technical design & IPR analysis for royalty-free video codecs
C. Reader
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702433
Abstract: Royalty-free standards for image and video coding have been actively discussed for over 20 years. This paper breaks down the issues of designing royalty-free codecs into the major topics of requirements, video coding tools, classes of patents, and performance. By dissecting the codec using a hierarchy of major to minor coding tools, it is possible to pinpoint where a patent impacts the video coding and what the consequence will be of avoiding the patented tool.
Citations: 1

Free-viewpoint image generation using different focal length camera array
Kengo Ando, Norishige Fukushima, T. Yendo, M. P. Tehrani, T. Fujii, M. Tanimoto
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702508
Abstract: The availability of multi-view images of a scene makes new and exciting applications possible, including Free-Viewpoint TV (FTV). FTV allows us to change viewpoint freely in a 3D world, where the virtual viewpoint images are synthesized by Image-Based Rendering (IBR). In this paper, we introduce an FTV depth estimation method for forward virtual viewpoints, together with a view generation method that uses a zoom camera in the camera setup to improve the virtual viewpoints' image quality. Simulation results confirm reduced depth estimation error with the proposed method in comparison with a conventional stereo matching scheme, and demonstrate improved image resolution for a virtually forward-moved camera using the zoom camera setup.
Citations: 1

Bit-plane compressive sensing with Bayesian decoding for lossy compression
Sz-Hsien Wu, Wen-Hsiao Peng, Tihao Chiang
28th Picture Coding Symposium, 2010-12-01. DOI: 10.1109/PCS.2010.5702577
Abstract: This paper addresses the problem of reconstructing a compressively sampled sparse signal from its lossy and possibly insufficient measurements. The process involves estimating the sparsity pattern and the sparse representation, for which we derive a vector estimator based on the Maximum a Posteriori Probability (MAP) rule. By making full use of signal prior knowledge, our scheme can achieve perfect reconstruction with a number of measurements close to the sparsity. It also shows a much lower sparsity-pattern error probability than prior work, given insufficient measurements. To better recover the most significant part of the sparse representation, we further introduce the notion of bit-plane separation. When applied to image compression, this technique in combination with our MAP estimator shows promising results compared to JPEG: the difference in compression ratio is within a factor of two at the same decoded quality.
Citations: 5