View synthesis prediction via motion field synthesis for 3D video coding

S. Shimizu, Shiori Sugimoto, Akira Kojima
2014 IEEE Visual Communications and Image Processing Conference (VCIP), December 2014. DOI: 10.1109/VCIP.2014.7051530

Abstract

View synthesis prediction is critical for efficient compression of 3D video, which consists of multiview video and depth maps. However, its performance is limited in practical situations, since it must rely on erroneous depth information and perform block-based compensation instead of pixel-based warping. This paper proposes a novel view synthesis prediction scheme in which a motion field is synthesized by utilizing a coarse disparity field derived from erroneous depth information. As part of the proposed depth-based motion field synthesis, occlusion-aware backward mapping and 3D motion field warping are performed. To improve prediction performance under block-based compensation, an adaptive prediction sample generation that exploits both temporal and inter-view correlations is also proposed. Experiments show that the proposed scheme achieves average bitrate reductions of 1.38% and 1.19% for coded views and synthesized views, respectively. The maximum gain is 11.57% for a dependent view.
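The core idea of depth-based motion field synthesis can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows the general mechanism the abstract describes: depth is converted to a coarse disparity field via the standard rectified-stereo relation d = f·B/Z, and the reference view's motion field is backward-mapped into the dependent view. The function names, the per-pixel (rather than block-based) loop, and the camera parameters are illustrative assumptions; occlusion handling and 3D motion warping from the paper are omitted.

```python
import numpy as np

def depth_to_disparity(depth, focal, baseline):
    # Rectified-stereo relation: disparity = focal * baseline / depth.
    # Depth maps in 3D video coding are typically quantized and noisy,
    # so the resulting disparity field is only coarse.
    return focal * baseline / np.maximum(depth, 1e-6)

def synthesize_motion_field(ref_mv, depth, focal, baseline):
    """Backward-map a reference-view motion field (H x W x 2) into the
    dependent view using a disparity field derived from depth.
    Simplified sketch: horizontal disparity only, no occlusion handling."""
    h, w, _ = ref_mv.shape
    disparity = depth_to_disparity(depth, focal, baseline)
    synth = np.zeros_like(ref_mv)
    for y in range(h):
        for x in range(w):
            # Backward mapping: for each dependent-view position (x, y),
            # look up the reference-view sample it corresponds to.
            xr = int(round(x + disparity[y, x]))
            if 0 <= xr < w:
                synth[y, x] = ref_mv[y, xr]
    return synth
```

With an accurate disparity field, the synthesized motion field lets the dependent view reuse reference-view motion without transmitting it, which is the source of the bitrate savings reported in the abstract.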