Multi-view Multi-modality Priors Residual Network of Depth Video Enhancement for Bandwidth Limited Asymmetric Coding Framework

Siqi Chen, Qiong Liu, You Yang
{"title":"Multi-view Multi-modality Priors Residual Network of Depth Video Enhancement for Bandwidth Limited Asymmetric Coding Framework","authors":"Siqi Chen, Qiong Liu, You Yang","doi":"10.1109/DCC.2019.00072","DOIUrl":null,"url":null,"abstract":"Asymmetric coding methodology for multi-view video plus depth is a promising technique for future three-dimensional and multi-view driven visual applications for its superior coding performance in bandwidth limited conditions. Since the depth video suffers from asymmetric distortions corresponding to viewpoint, it's a challenge in smooth and quality consistent content based interaction. To solve this challenge, we propose a residual learning framework to enhance the quality of compression distorted multi-view depth video. In this work, we exploit the correlation between viewpoints to restore the target viewpoint depth maps by using multi-modality priors, which are depth maps from adjacent viewpoints with better quality and color frames in the same viewpoint. A residual network is designed to fully exploit the contribution from these priors. Experimental results show the superiority of our framework in the quality improvement on both decoded depth video and synthesized virtual viewpoint images.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Data Compression Conference (DCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DCC.2019.00072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Asymmetric coding for multi-view video plus depth is a promising technique for future three-dimensional and multi-view visual applications because of its superior coding performance under bandwidth-limited conditions. However, since the depth video suffers from viewpoint-dependent asymmetric distortions, maintaining smooth, quality-consistent content-based interaction is a challenge. To address this, we propose a residual learning framework that enhances the quality of compression-distorted multi-view depth video. In this work, we exploit the correlation between viewpoints to restore the target-viewpoint depth maps using multi-modality priors, namely higher-quality depth maps from adjacent viewpoints and color frames from the same viewpoint. A residual network is designed to fully exploit the contribution of these priors. Experimental results show the superiority of our framework in improving the quality of both the decoded depth video and the synthesized virtual-viewpoint images.
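The abstract does not give implementation details, but the described design (multi-modality priors concatenated as input, with residual learning on the distorted depth) can be illustrated with a minimal sketch. The snippet below is an assumed PyTorch wiring, not the authors' exact architecture: the class name `DepthResidualNet`, the layer counts, and the feature widths are all illustrative choices.

```python
# Minimal sketch (assumed architecture) of a multi-modality residual
# enhancement network. Inputs: the compression-distorted target-view depth
# map, a higher-quality depth map from an adjacent viewpoint, and the color
# frame of the target viewpoint. The network predicts a residual that is
# added back to the distorted depth (residual learning).
import torch
import torch.nn as nn

class DepthResidualNet(nn.Module):  # hypothetical class name
    def __init__(self, features: int = 64, num_blocks: int = 8):
        super().__init__()
        # 1 distorted-depth + 1 adjacent-view depth + 3 color channels = 5
        self.head = nn.Conv2d(5, features, kernel_size=3, padding=1)
        body = []
        for _ in range(num_blocks):
            body += [nn.Conv2d(features, features, kernel_size=3, padding=1),
                     nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*body)
        # Single-channel output: the predicted depth residual
        self.tail = nn.Conv2d(features, 1, kernel_size=3, padding=1)

    def forward(self, distorted_depth, adjacent_depth, color):
        x = torch.cat([distorted_depth, adjacent_depth, color], dim=1)
        residual = self.tail(self.body(torch.relu(self.head(x))))
        # Enhance the distorted depth rather than regressing depth directly
        return distorted_depth + residual

if __name__ == "__main__":
    net = DepthResidualNet()
    d = torch.rand(1, 1, 256, 256)   # distorted target-view depth
    a = torch.rand(1, 1, 256, 256)   # adjacent-view (higher-quality) depth
    c = torch.rand(1, 3, 256, 256)   # same-view color frame
    print(net(d, a, c).shape)        # torch.Size([1, 1, 256, 256])
```

Training such a sketch would minimize a reconstruction loss (e.g. L1 or L2) between the enhanced depth and the uncompressed reference depth; the residual formulation keeps the network focused on correcting compression distortion rather than re-estimating depth from scratch.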