View-dependent Scene Appearance Synthesis using Inverse Rendering from Light Fields

Dahyun Kang, D. S. Jeon, Hak-Il Kim, Hyeonjoong Jang, Min H. Kim
{"title":"View-dependent Scene Appearance Synthesis using Inverse Rendering from Light Fields","authors":"Dahyun Kang, D. S. Jeon, Hak-Il Kim, Hyeonjoong Jang, Min H. Kim","doi":"10.1109/ICCP51581.2021.9466274","DOIUrl":null,"url":null,"abstract":"In order to enable view-dependent appearance synthesis from the light fields of a scene, it is critical to evaluate the geometric relationships between light and view over surfaces in the scene with high accuracy. Perfect diffuse reflectance is commonly assumed to estimate geometry from light fields via multiview stereo. However, this diffuse surface assumption is invalid with real-world objects. Geometry estimated from light fields is severely degraded over specular surfaces. Additional scene-scale 3D scanning based on active illumination could provide reliable geometry, but it is sparse and thus still insufficient to calculate view-dependent appearance, such as specular reflection, in geometry-based view synthesis. In this work, we present a practical solution of inverse rendering to enable view-dependent appearance synthesis, particularly of scene scale. We enhance the scene geometry by eliminating the specular component, thus enforcing photometric consistency. We then estimate spatially-varying parameters of diffuse, specular, and normal components from wide-baseline light fields. To validate our method, we built a wide-baseline light field imaging prototype that consists of 32 machine vision cameras with fisheye lenses of 185 degrees that cover the forward hemispherical appearance of scenes. We captured various indoor scenes, and results validate that our method can estimate scene geometry and reflectance parameters with high accuracy, enabling view-dependent appearance synthesis at scene scale with high fidelity, i.e., specular reflection changes according to a virtual viewpoint.","PeriodicalId":132124,"journal":{"name":"2021 IEEE International Conference on Computational Photography (ICCP)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Computational Photography (ICCP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCP51581.2021.9466274","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

To enable view-dependent appearance synthesis from the light fields of a scene, the geometric relationship between light and view over the scene's surfaces must be evaluated with high accuracy. Geometry is commonly estimated from light fields via multiview stereo under the assumption of perfectly diffuse reflectance. However, this diffuse-surface assumption does not hold for real-world objects, and the estimated geometry degrades severely over specular surfaces. Additional scene-scale 3D scanning based on active illumination can provide reliable geometry, but it is sparse and therefore still insufficient for computing view-dependent appearance, such as specular reflection, in geometry-based view synthesis. In this work, we present a practical inverse-rendering solution that enables view-dependent appearance synthesis, particularly at scene scale. We enhance the scene geometry by eliminating the specular component, thereby enforcing photometric consistency. We then estimate spatially varying diffuse, specular, and normal parameters from wide-baseline light fields. To validate our method, we built a wide-baseline light-field imaging prototype consisting of 32 machine-vision cameras with 185-degree fisheye lenses that cover the forward hemispherical appearance of a scene. We captured various indoor scenes, and the results confirm that our method estimates scene geometry and reflectance parameters with high accuracy, enabling high-fidelity view-dependent appearance synthesis at scene scale, i.e., specular reflections change according to the virtual viewpoint.
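To make the final synthesis step concrete, the sketch below (not the authors' implementation) shows how per-point reflectance parameters recovered by inverse rendering — diffuse albedo, specular color, and a surface normal — can drive view-dependent shading for a virtual viewpoint. The diffuse + specular split here uses a simple Blinn-Phong-style lobe purely for illustration; the paper's actual BRDF model and parameterization are not reproduced, and all function names, parameters, and values are hypothetical.

```python
# Minimal sketch, assuming a Blinn-Phong-style diffuse + specular model
# (illustrative only; the paper's reflectance model may differ).
import numpy as np

def normalize(v):
    """Normalize vectors along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def shade_point(albedo, spec_albedo, shininess, normal, point, light_pos, view_pos):
    """Return RGB radiance at a surface point for a given virtual viewpoint.

    albedo, spec_albedo : (3,) diffuse and specular colors (estimated per point)
    shininess           : scalar specular exponent (roughness surrogate)
    normal              : (3,) estimated surface normal
    point               : (3,) surface position from the reconstructed geometry
    light_pos, view_pos : (3,) light and virtual camera positions
    """
    n = normalize(normal)
    l = normalize(light_pos - point)          # direction to the light
    v = normalize(view_pos - point)           # direction to the virtual viewpoint
    h = normalize(l + v)                      # half-vector for the specular lobe

    n_dot_l = max(float(np.dot(n, l)), 0.0)
    n_dot_h = max(float(np.dot(n, h)), 0.0)

    diffuse = albedo * n_dot_l                        # view-independent term
    specular = spec_albedo * (n_dot_h ** shininess)   # view-dependent term
    return diffuse + specular

# Example: the specular term changes as the virtual viewpoint moves,
# while the diffuse term stays fixed for the same light and surface point.
p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
light = np.array([1.0, 1.0, 2.0])
for view in (np.array([0.5, 0.5, 2.0]), np.array([-1.0, 0.0, 2.0])):
    rgb = shade_point(np.array([0.6, 0.5, 0.4]),   # diffuse albedo
                      np.array([0.9, 0.9, 0.9]),   # specular albedo
                      64.0, n, p, light, view)
    print(view, rgb)
```

In this toy setup, moving the virtual camera shifts the half-vector and hence the specular highlight, which is exactly the view-dependent effect the abstract targets; the accuracy of that effect in practice hinges on the quality of the estimated normals and specular parameters.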