2014 IEEE Visual Communications and Image Processing Conference: Latest Publications

Towards simple and smooth rate adaption for VBR video in DASH
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051491
Yanping Zhou, Y. Duan, Jun Sun, Zongming Guo
{"title":"Towards simple and smooth rate adaption for VBR video in DASH","authors":"Yanping Zhou, Y. Duan, Jun Sun, Zongming Guo","doi":"10.1109/VCIP.2014.7051491","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051491","url":null,"abstract":"Rate adaption in Dynamic Adaptive Streaming over HTTP (DASH) is widely applied to adapt the transmission rate to varying network capacity. For rate adaption on variable bitrate (VBR) encoded video, it is still a challenge to properly identify and address the dynamics of bandwidth and segment bitrate. In this paper, the trend of client buffer level variation (TBLV) is analyzed to be a more effective metric for detecting the dynamics of bandwidth and segment bitrate compared to previous metrics. Then, a partial-linear trend prediction model is developed to accurately estimate TBLV. Finally, based on the prediction model, a novel simple rate adaption algorithm is designed to achieve efficient and smooth video quality level adjustment. Experimental results show that while maintaining similar average video quality, the proposed algorithm achieves up to 47.3% improvement in rate adaption smoothness compared to the existing work.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"574 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127955125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
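As an illustration of the buffer-trend idea, the following minimal Python sketch estimates the trend of the client buffer level with a least-squares slope over a sliding window and switches the quality level only when the trend is clearly rising or falling. The window size, thresholds, and switch rule are hypothetical placeholders standing in for the paper's partial-linear TBLV prediction model, not a reproduction of it.

```python
# Minimal sketch: buffer-trend-driven quality switching (illustrative only).
import numpy as np

def buffer_trend(levels, window=5):
    """Least-squares slope (seconds of buffer gained per segment) over the last samples."""
    y = np.asarray(levels[-window:], dtype=float)
    if len(y) < 2:
        return 0.0
    x = np.arange(len(y), dtype=float)
    return float(np.polyfit(x, y, 1)[0])

def next_quality(levels, current, n_levels, up_th=0.5, down_th=-0.5):
    """Switch up only on a clearly rising buffer, down on a clearly falling one;
    otherwise keep the current level to avoid oscillation."""
    trend = buffer_trend(levels)
    if trend > up_th and current < n_levels - 1:
        return current + 1
    if trend < down_th and current > 0:
        return current - 1
    return current

history = [8.0, 8.6, 9.4, 10.3, 11.1]                 # buffer filling up
print(next_quality(history, current=2, n_levels=5))   # -> 3 (switch one level up)
```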
Layer-based image completion by Poisson surface reconstruction
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051600
Hengjin Liu, Huizhu Jia, Xiaodong Xie, Xiangyu Kong, Yuanchao Bai, Wen Gao
{"title":"Layer-based image completion by poisson surface reconstruction","authors":"Hengjin Liu, Huizhu Jia, Xiaodong Xie, Xiangyu Kong, Yuanchao Bai, Wen Gao","doi":"10.1109/VCIP.2014.7051600","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051600","url":null,"abstract":"Image completion has been widely used to repair damaged regions of a given digital image in a visually plausible way. However, it is difficult to infer appropriate information, meanwhile keep globally coherent just from the origin image when its critical parts are missing. To address this problem, we propose a novel layer-divided image completion scheme, which contains two major steps. First, we extract foregrounds of both target image and source image, and then we apply a guided Poisson surface reconstruction technique to complete the target foreground according to parameters obtained from optimal-matching calculation. Second, to fill the remaining damaged part, a related exemplar-based image completion algorithm is further devised. Several experiments and comparisons show the effectiveness and robustness of our proposed algorithm.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134205555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
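The exemplar-based step of such a pipeline can be illustrated with a minimal sketch. The greedy fill below repeatedly picks a hole pixel near the hole boundary, searches a subsampled set of fully known patches for the best match on the known pixels, and copies the missing pixels over. The Poisson surface reconstruction of the foreground layer and the optimal-matching parameters are not modelled here; the patch size, subsampling stride, and demo image are arbitrary choices for brevity.

```python
# Minimal sketch: greedy exemplar-based hole filling (illustrative only).
import numpy as np

def exemplar_fill(img, mask, patch=7, stride=97):
    """img: float HxW (grayscale for brevity); mask: bool HxW, True = missing."""
    img, mask = img.copy(), mask.copy()
    h = patch // 2
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(h, H - h), np.arange(h, W - h), indexing="ij")
    centres = np.stack([ys.ravel(), xs.ravel()], axis=1)[::stride]  # subsampled source centres
    while mask.any():
        cand = [(y, x) for y, x in zip(*np.nonzero(mask)) if h <= y < H - h and h <= x < W - h]
        if not cand:
            break
        # Boundary-most hole pixel: the one whose neighbourhood has the fewest missing pixels.
        hy, hx = min(cand, key=lambda p: mask[p[0]-h:p[0]+h+1, p[1]-h:p[1]+h+1].sum())
        tgt = img[hy-h:hy+h+1, hx-h:hx+h+1]           # view into img
        hole = mask[hy-h:hy+h+1, hx-h:hx+h+1]
        best, best_cost = None, np.inf
        for sy, sx in centres:
            if mask[sy-h:sy+h+1, sx-h:sx+h+1].any():  # source patch must be fully known
                continue
            src = img[sy-h:sy+h+1, sx-h:sx+h+1]
            cost = ((src - tgt)[~hole] ** 2).sum()    # compare on known pixels only
            if cost < best_cost:
                best, best_cost = src, cost
        if best is None:
            break
        tgt[hole] = best[hole]                        # copy missing pixels from the exemplar
        mask[hy-h:hy+h+1, hx-h:hx+h+1] = False
    return img

gradient = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
damaged, mask = gradient.copy(), np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True
damaged[mask] = 0.0
print(exemplar_fill(damaged, mask)[28:36, 28:36].mean())  # hole filled with nearby content
```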
Four-step algorithm for early termination in HEVC inter-frame prediction based on decision trees
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051505
G. Corrêa, P. Assunção, L. Agostini, L. Cruz
{"title":"Four-step algorithm for early termination in HEVC inter-frame prediction based on decision trees","authors":"G. Corrêa, P. Assunção, L. Agostini, L. Cruz","doi":"10.1109/VCIP.2014.7051505","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051505","url":null,"abstract":"The flexible encoding structures of High Efficiency Video Coding (HEVC) are the main responsible for the improvements of the standard in terms of compression efficiency in comparison to its predecessors. However, the flexibility provided by these structures is accompanied by high levels of computational complexity, since more options are considered in a Rate-Distortion (R-D) optimization scheme. In this paper, we propose a four-step early-termination method, which decides whether the inter mode decision should be halted without testing all possibilities. The method employs a set of decision trees, which are trained offline once, using information from unconstrained HEVC encoding runs. The resulting trees present a mode decision accuracy ranging from 97.6% to 99.4% with a negligible computational overhead. The method is capable of achieving an average computational complexity decrease of 49% at the cost of a very small Bjontegaard Delta (BD)-rate increase (0.58%).","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131727162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
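A minimal sketch of the offline-training idea, using scikit-learn: a shallow decision tree is trained on features gathered from encoding runs and then queried inside the encoder to decide whether to halt the inter mode decision early. The features, labels, and tree depth below are hypothetical stand-ins; the paper's actual feature set and four-step structure are not reproduced.

```python
# Minimal sketch: decision-tree-based early termination (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Stand-in training data: rows = coding units, columns = hypothetical features
# (e.g. merge-mode RD cost, residual energy, neighbouring CU depth),
# label = 1 if the full search would pick the candidate already evaluated.
X = rng.normal(size=(10_000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=5)   # shallow tree => negligible runtime overhead
tree.fit(X, y)

def should_terminate_early(features) -> bool:
    """Inside the encoder: skip the remaining inter-mode tests when the tree
    predicts that the already-evaluated candidate will win anyway."""
    return bool(tree.predict(np.asarray(features, dtype=float).reshape(1, -1))[0])

print(should_terminate_early([1.2, 0.3, -0.1]))
```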
Adaptive frame level rate control in 3D-HEVC
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051586
Songchao Tan, Junjun Si, Siwei Ma, Shanshe Wang, Wen Gao
{"title":"Adaptive frame level rate control in 3D-HEVC","authors":"Songchao Tan, Junjun Si, Siwei Ma, Shanshe Wang, Wen Gao","doi":"10.1109/VCIP.2014.7051586","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051586","url":null,"abstract":"In this paper, we propose a new frame level rate control algorithm for the high efficiency video coding (HEVC) based 3D video (3DV) compression standard. In the proposed scheme, a new initial quantization parameter (QP) decision scheme is provided, and the bit allocation for each view is investigated to smooth the bitrate fluctuation and reach accurate rate control. Meanwhile, a simplified complexity estimation method for the extended view is introduced to reduce the computational complexity while improves the coding performance. The experimental results on 3DV test sequences demonstrate that the proposed algorithm can achieve better R-D performance and more accurate rate control compared to the benchmark algorithms in HTM10.0. The maximum performance improvement can be up to 12.4% and the average BD-rate gain for each view is 5.2%, 6.5% and 6.6% respectively.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133059083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
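The two ingredients named above, initial QP decision and per-view bit allocation, can be illustrated by a small sketch. The logarithmic QP model, its constants, and the complexity weights below are hypothetical placeholders, not the parameters used in the paper.

```python
# Minimal sketch: frame-level bit allocation across views + initial QP (illustrative only).
import math

def allocate_view_bits(frame_bits: float, complexities: list) -> list:
    """Split the frame budget proportionally to each view's estimated complexity."""
    total = sum(complexities)
    return [frame_bits * c / total for c in complexities]

def initial_qp(bits_per_pixel: float, alpha: float = 4.2, beta: float = 13.7) -> int:
    """Map target bits-per-pixel to a starting QP with a log model
    (hypothetical alpha/beta; a real encoder fits these to the content)."""
    qp = round(alpha * math.log2(1.0 / max(bits_per_pixel, 1e-6)) + beta)
    return min(max(qp, 0), 51)

# Example: 3 views (base + 2 extended), extended views estimated cheaper to code.
print(allocate_view_bits(120_000, [1.0, 0.6, 0.6]))
print(initial_qp(0.08))
```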
A unified framework of hash-based matching for screen content coding
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051623
Bin Li, Jizheng Xu, Feng Wu
{"title":"A unified framework of hash-based matching for screen content coding","authors":"Bin Li, Jizheng Xu, Feng Wu","doi":"10.1109/VCIP.2014.7051623","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051623","url":null,"abstract":"This paper introduces a unified framework of hash-based matching method for screen content coding. Screen content has some different characteristics from camera-captured content, such as large motion and repeating patterns. Hash-based matching is proposed to better explore the correlation in screen content, thus, improving the coding efficiency. The proposed method can handle both intra picture and inter picture block matching with variable block sizes in a unified framework. The proposed framework is also easy to be extended to handle other motion models to further improve the coding efficiency of screen content. We also develop fast encoding algorithms to make full use of the hash results. The experimental results show the proposed algorithm achieves about 12% bit saving while saving more than 25% encoding time. The bit saving is up to 57% and the encoding time saving is up to 60% for the proposed method.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124407145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
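A minimal sketch of hash-based block matching: every reference block is hashed into a table, and the current block's hash retrieves exact-match candidates, which suits the repeating patterns typical of screen content. The fixed 8x8 block size and the MD5 hash are simplifications for illustration; the paper uses variable block sizes and codec-integrated hashing.

```python
# Minimal sketch: hash table over reference blocks, lookup for the current block.
import hashlib
from collections import defaultdict
import numpy as np

def block_hash(block: np.ndarray) -> bytes:
    return hashlib.md5(block.tobytes()).digest()

def build_hash_table(ref: np.ndarray, size: int = 8) -> dict:
    """Map block hash -> list of top-left positions of reference blocks."""
    table = defaultdict(list)
    H, W = ref.shape
    for y in range(H - size + 1):
        for x in range(W - size + 1):
            table[block_hash(ref[y:y+size, x:x+size])].append((y, x))
    return table

def find_match(table: dict, cur_block: np.ndarray):
    """Return candidate positions whose reference blocks hash-collide with the current block."""
    return table.get(block_hash(cur_block), [])

ref = np.zeros((64, 64), dtype=np.uint8)
ref[8:16, 8:16] = 200                      # a distinctive pattern in the reference
cur = ref[8:16, 8:16].copy()
print(find_match(build_hash_table(ref), cur))   # -> [(8, 8)]
```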
A parallel Huffman coder on the CUDA architecture
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051566
Habibelahi Rahmani, C. Topal, C. Akinlar
{"title":"A parallel Huffman coder on the CUDA architecture","authors":"Habibelahi Rahmani, C. Topal, C. Akinlar","doi":"10.1109/VCIP.2014.7051566","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051566","url":null,"abstract":"We present a parallel implementation of the widely-used entropy encoding algorithm, the Huffman coder, on the NVIDIA CUDA architecture. After constructing the Huffman codeword tree serially, we proceed in parallel by generating a byte stream where each byte represents a single bit of the compressed output stream. The final step is then to combine each consecutive 8 bytes into a single byte in parallel to generate the final compressed output bit stream. Experimental results show that we can achieve up to 22× speedups compared to the serial CPU implementation without any constraint on the maximum codeword length or data entropy.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131984794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
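The two data-parallel passes described above can be mimicked with NumPy as a minimal sketch: first each symbol's codeword is expanded into one byte per compressed bit, then every group of 8 bit-bytes is packed into one output byte. The toy codebook and input are assumptions; on the GPU each element would be handled by a CUDA thread rather than a vectorized NumPy call.

```python
# Minimal sketch of the two passes: bit expansion, then 8-to-1 byte packing.
import numpy as np

codebook = {"a": "0", "b": "10", "c": "110", "d": "111"}   # toy Huffman codes
data = "abacabad"

# Pass 1: one byte per compressed bit (0x00 or 0x01).
bit_bytes = np.frombuffer(
    "".join(codebook[s] for s in data).encode("ascii"), dtype=np.uint8) - ord("0")

# Pass 2: pad to a multiple of 8 and pack each group of 8 bit-bytes into one byte.
pad = (-len(bit_bytes)) % 8
bits = np.concatenate([bit_bytes, np.zeros(pad, dtype=np.uint8)])
packed = np.packbits(bits)                 # the same combine step, done by NumPy here

print(bits, packed)
```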
End to end video distortion estimation with advanced error concealment considerations
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051564
Qin Cheng, D. Agrafiotis
{"title":"End to end video distortion estimation with advanced error concealment considerations","authors":"Qin Cheng, D. Agrafiotis","doi":"10.1109/VCIP.2014.7051564","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051564","url":null,"abstract":"Video transmission over error prone channels can suffer from packet losses when channel conditions are not favourable. As a result the quality of the decoded video at the receiver often differs from that of the encoded video at the transmitter. Accurate estimation of the end to end distortion (the distortion due to compression and packet loss after decoder error concealment) at the encoder can lead to more efficient and effective application of error resilience (e.g. selective intra coding, redundant slices etc.). This paper presents an end to end distortion estimation model that incorporates a probabilistic estimation of the distortion introduced by advanced error concealment methods often used by decoders, and new decaying factors, to mitigate the effect of packets loss. The proposed model offers significant improvements in estimation accuracy relative to existing models that only consider previous frame copy as the concealment strategy of the decoder.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126401879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
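A minimal sketch of the general recursion behind such estimators: with loss probability p a frame is concealed, and the resulting channel distortion propagates into later frames attenuated by a decaying factor. The concealment distortion values and the single decay constant below are illustrative placeholders; the paper's probabilistic model of advanced concealment is considerably richer.

```python
# Minimal sketch: expected decoder-side distortion per frame (illustrative only).
def expected_distortion(d_enc, d_conceal, p_loss, decay=0.9):
    """d_enc[k]     : encoder-side (compression) MSE of frame k
       d_conceal[k] : additional MSE if frame k is lost and concealed
       returns the estimated end-to-end MSE per frame."""
    d_chan = 0.0                    # channel-induced distortion propagated from the reference
    estimates = []
    for de, dc in zip(d_enc, d_conceal):
        # Lost: concealment error dominates; received: only propagated error remains,
        # attenuated by the decaying factor.
        d_chan = p_loss * dc + (1.0 - p_loss) * decay * d_chan
        estimates.append(de + d_chan)
    return estimates

print(expected_distortion(d_enc=[2.0, 2.1, 1.9],
                          d_conceal=[30.0, 28.0, 35.0],
                          p_loss=0.05))
```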
Image transformation using limited reference with application to photo-sketch synthesis
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051499
Wei Bai, Yanghao Li, Jiaying Liu, Zongming Guo
{"title":"Image transformation using limited reference with application to photo-sketch synthesis","authors":"Wei Bai, Yanghao Li, Jiaying Liu, Zongming Guo","doi":"10.1109/VCIP.2014.7051499","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051499","url":null,"abstract":"Image transformation refers to transforming images from a source image space to a target image space. Contemporary image transformation methods achieve this by learning coupled dictionaries from a set of paired images. However, in practical use, such paired training images are not easy to get especially when the target image style is not fixed. Thus in most cases, the reference is limited. In this paper, we propose a sparse representation based framework of transforming images with limited reference, which can be used for the typical image transformation application, photo-sketch synthesis. In the learning stage, the edge features are utilized to map patches between different style images, thus building the coupled database for dictionary learning. In the reconstruction stage, sparse representation can well preserve the basic structure of image contents. In addition, a texture synthesis strategy is introduced to enhance target-like textures in the output image. Experimental results show that the performance of our method is comparable to state-of-the-art methods even with limited reference, which is very efficient and less restrictive for practical use.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"7 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114106910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
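The coupled-dictionary reconstruction step can be sketched with scikit-learn's orthogonal matching pursuit: a photo patch is sparse-coded over the photo dictionary and the same coefficients are reused with the column-aligned sketch dictionary. The random dictionaries and sparsity level below are stand-ins; learning the coupled dictionaries from edge-mapped patches, as described above, is omitted.

```python
# Minimal sketch: coupled-dictionary patch mapping for photo-sketch synthesis.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_dim, n_atoms = 64, 256                      # 8x8 patches, 256 coupled atoms (stand-ins)
D_photo = rng.normal(size=(n_dim, n_atoms))
D_photo /= np.linalg.norm(D_photo, axis=0)    # unit-norm photo atoms
D_sketch = rng.normal(size=(n_dim, n_atoms))  # aligned sketch atoms (random stand-in)

def photo_patch_to_sketch(patch: np.ndarray, k: int = 5) -> np.ndarray:
    """Sparse-code the photo patch over D_photo, then reuse the same
    coefficients with D_sketch to synthesise the sketch-domain patch."""
    alpha = orthogonal_mp(D_photo, patch, n_nonzero_coefs=k)
    return D_sketch @ alpha

patch = rng.normal(size=n_dim)
print(photo_patch_to_sketch(patch).shape)     # (64,) reconstructed sketch patch
```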
Visual saliency guided mode decision in video compression based on Laplace distribution of DCT coefficients
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051613
S. Wulf, U. Zölzer
{"title":"Visual saliency guided mode decision in video compression based on Laplace distribution of DCT coefficients","authors":"S. Wulf, U. Zölzer","doi":"10.1109/VCIP.2014.7051613","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051613","url":null,"abstract":"A visual saliency based approach for macroblock (MB) mode selection in video compression is presented. It is based on the Laplace distribution of transformed residuals. Visual saliency of a MB is used to make the encoder vote for a mode which requires less bits in a region which is visually less important. It shows a gain of up to 0.6 dB in terms of PSNR compared to conventional rate-distortion optimization (RDO) procedure in H.264/AVC. Moreover, it suggests to be a competitive alternative to a well-established perceptually based video coding method. At the same time it has no negative impact on the PSNR.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"338 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114240290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
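A minimal sketch of the saliency-guided decision: each candidate mode's Lagrangian cost J = D + lambda*R is evaluated with lambda scaled up in low-saliency macroblocks, so cheaper modes win where the viewer is unlikely to look. The per-mode distortion/rate pairs and the scaling rule are invented for illustration; in the paper the rate term is estimated from a Laplace model of the transformed residuals.

```python
# Minimal sketch: saliency-scaled Lagrangian mode selection (illustrative only).
def pick_mode(candidates, lam, saliency, max_scale=4.0):
    """candidates: dict mode -> (distortion, rate); saliency in [0, 1]."""
    lam_eff = lam * (1.0 + (max_scale - 1.0) * (1.0 - saliency))
    return min(candidates, key=lambda m: candidates[m][0] + lam_eff * candidates[m][1])

modes = {"SKIP": (140.0, 2.0), "16x16": (90.0, 18.0), "8x8": (70.0, 55.0)}
print(pick_mode(modes, lam=1.0, saliency=0.9))   # salient MB: a higher-fidelity mode wins
print(pick_mode(modes, lam=1.0, saliency=0.1))   # non-salient MB: the cheap SKIP mode wins
```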
A new frame interpolation method with pixel-level motion vector field
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051578
Chuanxin Tang, Ronggang Wang, Wenmin Wang, Wen Gao
{"title":"A new frame interpolation method with pixel-level motion vector field","authors":"Chuanxin Tang, Ronggang Wang, Wenmin Wang, Wen Gao","doi":"10.1109/VCIP.2014.7051578","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051578","url":null,"abstract":"In this paper, a new frame interpolation method with pixel-level motion vector field (MVF) is proposed. Given that existing methods cannot handle occlusions and blocking artifacts well, there are three contributions in our method: (i) applying the pixel-level motion vectors (MVs) estimated by optical flow algorithm to eliminate blocking artifacts (ii) motion post-processing to keep spatial consistency (iii) robust warping method to address collisions and holes caused by occlusions. The method could remove blocking artifacts and alleviate the artifacts caused by occlusions. Experimental results show that the proposed method outperforms existing methods both in terms of objective and subjective performances, especially for sequences with complex motions.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"37 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120914563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
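A minimal sketch of the warping stage: each pixel of the first frame is pushed halfway along its pixel-level motion vector into the intermediate frame, collisions keep the candidate with the smaller photometric error, and remaining holes are taken from the second frame. Optical-flow estimation, motion post-processing, and proper occlusion reasoning are omitted; the collision criterion and the toy global-shift example are assumptions.

```python
# Minimal sketch: forward warping to the mid frame with collision and hole handling.
import numpy as np

def interpolate_half(frame0, frame1, flow):
    """frame0/frame1: float HxW; flow: HxWx2 (dy, dx) from frame0 to frame1."""
    H, W = frame0.shape
    mid = np.full((H, W), np.nan)
    best_err = np.full((H, W), np.inf)
    for y in range(H):
        for x in range(W):
            dy, dx = flow[y, x]
            ty, tx = int(round(y + 0.5 * dy)), int(round(x + 0.5 * dx))   # mid-frame target
            yy, xx = int(round(y + dy)), int(round(x + dx))               # position in frame1
            if not (0 <= ty < H and 0 <= tx < W and 0 <= yy < H and 0 <= xx < W):
                continue
            err = abs(frame0[y, x] - frame1[yy, xx])   # collision tie-break (stand-in)
            if err < best_err[ty, tx]:
                best_err[ty, tx] = err
                mid[ty, tx] = 0.5 * (frame0[y, x] + frame1[yy, xx])
    holes = np.isnan(mid)                  # pixels nothing mapped to: take frame1 directly
    mid[holes] = frame1[holes]
    return mid

f0 = np.tile(np.arange(16, dtype=float), (16, 1))
f1 = np.roll(f0, 2, axis=1)                # global shift of 2 pixels
flow = np.zeros((16, 16, 2))
flow[..., 1] = 2.0
print(interpolate_half(f0, f1, flow)[8, 4:8])   # -> [3. 4. 5. 6.], shifted by 1 pixel
```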