Photo-Realistic Streamable Free-Viewpoint Video

Shaohui Jiao, Yuzhong Chen, Zhaoliang Liu, Danying Wang, Wen-Hui Zhou, Li Zhang, Yue Wang
ACM SIGGRAPH 2023 Posters, published July 23, 2023. DOI: 10.1145/3588028.3603666

Abstract

We present a novel free-viewpoint video (FVV) framework for capturing, processing, and compressing volumetric content for immersive VR/AR experiences. Compared to previous FVV capture systems, we propose an easy-to-use multi-camera array consisting of time-synchronized mobile phones. To generate photo-realistic FVV results from sparse multi-camera input, we improve novel view synthesis by introducing a visual-hull-guided neural representation, called VH-NeRF. Our VH-NeRF combines the advantages of explicit models from traditional 3D reconstruction with the notable implicit representation of Neural Radiance Fields. Each dynamic entity's VH-NeRF is learned under supervision from visual hull reconstruction data, and can be further edited to compose complex, large-scale dynamic scenes. Moreover, our FVV solution supports both effective compression and transmission of multi-perspective videos, as well as real-time rendering on consumer-grade hardware. To the best of our knowledge, ours is the first solution for photo-realistic FVV captured by a sparse multi-camera array, and it allows real-time live streaming of large-scale dynamic scenes for immersive VR and AR applications on mobile devices.
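The geometric prior behind VH-NeRF is the visual hull: the intersection of the back-projected silhouette cones from each camera, which over-approximates the true shape and can guide where a radiance field places density. The abstract gives no implementation details, so as a hedged illustration only (not the authors' pipeline), here is a minimal sketch of classic voxel carving with three orthographic silhouettes; the grid size and axis conventions are assumptions for the toy example.

```python
import numpy as np

def carve_visual_hull(silhouettes):
    """Carve a voxel visual hull from three orthographic silhouettes.

    silhouettes: dict of boolean masks 'xy', 'xz', 'yz' (each n x n).
    Returns an n x n x n boolean occupancy grid: a voxel survives only
    if its projection lies inside every silhouette.
    """
    n = silhouettes['xy'].shape[0]
    hull = np.ones((n, n, n), dtype=bool)
    # Broadcast each 2D mask along the axis it was projected down.
    hull &= silhouettes['xy'][:, :, None]   # projected along z
    hull &= silhouettes['xz'][:, None, :]   # projected along y
    hull &= silhouettes['yz'][None, :, :]   # projected along x
    return hull

# Toy example: a sphere's orthographic silhouettes are disks. The
# carved hull contains the true shape (the classic visual-hull
# over-approximation property), so it is a safe prior for density.
n = 32
ax = np.arange(n) - n / 2 + 0.5
x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')
sphere = x**2 + y**2 + z**2 <= (n / 4)**2
sils = {
    'xy': sphere.any(axis=2),
    'xz': sphere.any(axis=1),
    'yz': sphere.any(axis=0),
}
hull = carve_visual_hull(sils)
assert not np.logical_and(sphere, ~hull).any()  # hull contains the shape
```

With real perspective cameras the projection step uses each phone's calibrated intrinsics and extrinsics instead of axis-aligned broadcasting, and the resulting hull can supervise a NeRF by penalizing density outside the carved region.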