View synthesis prediction via motion field synthesis for 3D video coding
S. Shimizu, Shiori Sugimoto, Akira Kojima
2014 IEEE Visual Communications and Image Processing Conference, December 2014
DOI: 10.1109/VCIP.2014.7051530
Citations: 0
Abstract
View synthesis prediction is critical for the efficient compression of 3D video, which consists of multiview video and depth maps. However, its performance is limited in practical situations, since erroneous depth information must be used and block-based compensation must be performed instead of pixel-based warping. This paper proposes a novel view synthesis prediction scheme in which a motion field is synthesized by utilizing a coarse disparity field derived from erroneous depth information. As part of the proposed depth-based motion field synthesis, occlusion-aware backward mapping and 3D motion field warping are performed. To improve prediction performance under block-based compensation, an adaptive prediction sample generation method that exploits both temporal and inter-view correlations is also proposed. Experiments show that the proposed scheme achieves average bitrate reductions of 1.38% for coded views and 1.19% for synthesized views, with a maximum gain of 11.57% for a dependent view.
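The abstract's starting point, deriving a coarse disparity field from (possibly erroneous) depth information, rests on the standard depth-to-disparity mapping used in 3D video coding. As a minimal illustrative sketch (not the paper's actual implementation; the camera parameters below are arbitrary assumptions), an 8-bit depth sample can be converted to a horizontal disparity as follows:

```python
# Hedged sketch of depth-to-disparity conversion, the usual first step in
# depth-based view synthesis prediction. All parameter values are illustrative
# assumptions, not taken from the paper.

def depth_to_disparity(v, z_near, z_far, focal, baseline):
    """Map an 8-bit depth sample v (0..255) to a horizontal disparity in pixels.

    Uses the common MPEG 3DV depth quantization: v = 255 encodes z_near,
    v = 0 encodes z_far, with 1/Z sampled uniformly in between.
    """
    inv_z = (v / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return focal * baseline * inv_z  # disparity = f * B / Z

# Nearer objects (larger v) yield larger disparities between views.
d_near = depth_to_disparity(255, z_near=44.0, z_far=500.0,
                            focal=1732.875, baseline=5.0)
d_far = depth_to_disparity(0, z_near=44.0, z_far=500.0,
                           focal=1732.875, baseline=5.0)
```

Coding errors in the depth map perturb `v`, which is why the paper treats the resulting disparity field as coarse and synthesizes a motion field from it rather than warping pixels directly.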