Multi-view Multi-modality Priors Residual Network of Depth Video Enhancement for Bandwidth Limited Asymmetric Coding Framework
Siqi Chen, Qiong Liu, You Yang
2019 Data Compression Conference (DCC)
Published: 2019-03-26
DOI: 10.1109/DCC.2019.00072
Citations: 3
Abstract
Asymmetric coding of multi-view video plus depth is a promising technique for future three-dimensional and multi-view visual applications because of its superior coding performance under bandwidth-limited conditions. However, since the depth video suffers viewpoint-dependent asymmetric distortions, providing smooth, quality-consistent content-based interaction is a challenge. To address this challenge, we propose a residual learning framework that enhances the quality of compression-distorted multi-view depth video. In this work, we exploit the correlation between viewpoints to restore the target-viewpoint depth maps using multi-modality priors, namely higher-quality depth maps from adjacent viewpoints and color frames from the same viewpoint. A residual network is designed to fully exploit the contribution of these priors. Experimental results show the superiority of our framework in improving the quality of both the decoded depth video and the synthesized virtual-viewpoint images.
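The core idea of the residual formulation described above can be sketched as follows. This is a minimal, hypothetical illustration only: the real method uses a trained residual CNN, whereas here `toy_predictor` is a stand-in function, and all shapes, names, and the averaging rule are assumptions made for the example, not the paper's model.

```python
import numpy as np

def enhance_depth(distorted_depth, adjacent_depth, color, predict_residual):
    """Residual learning formulation: enhanced = distorted + R(priors).

    The multi-modality priors are stacked as channels: the distorted
    target-view depth, a higher-quality adjacent-view depth map, and the
    same-view color frame. The predictor outputs only the correction
    (residual), which is added back to the distorted input.
    """
    priors = np.stack([distorted_depth, adjacent_depth, color], axis=0)
    residual = predict_residual(priors)
    return distorted_depth + residual

def toy_predictor(priors):
    # Stand-in for the trained residual network: move the distorted depth
    # halfway toward the cleaner adjacent-view prior (illustrative only).
    distorted, adjacent = priors[0], priors[1]
    return 0.5 * (adjacent - distorted)

h, w = 4, 4
distorted = np.full((h, w), 100.0)   # compression-distorted target-view depth
adjacent  = np.full((h, w), 110.0)   # better-quality adjacent-view depth prior
color     = np.zeros((h, w))         # same-view color frame (unused by the toy)

enhanced = enhance_depth(distorted, adjacent, color, toy_predictor)
print(enhanced[0, 0])  # 105.0: distorted depth pulled toward the cleaner prior
```

Predicting the residual rather than the depth map itself lets the network focus on the (sparse, small-magnitude) compression distortion instead of re-synthesizing the full depth signal.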