Virtual Content Creation Using Dynamic Omnidirectional Texture Synthesis

Chih-Fan Chen, Evan Suma Rosenberg
{"title":"使用动态全方位纹理合成创建虚拟内容","authors":"Chih-Fan Chen, Evan Suma Rosenberg","doi":"10.1109/VR.2018.8446410","DOIUrl":null,"url":null,"abstract":"We present a dynamic omnidirectional texture synthesis (DOTS) approach for generating real-time virtual reality content captured using a consumer-grade RGB-D camera. Compared to a single fixed-viewpoint color map, view-dependent texture mapping (VDTM) techniques can reproduce finer detail and replicate dynamic lighting effects that become especially noticeable with head tracking in virtual reality. However, VDTM is very sensitive to errors such as missing data or inaccurate camera pose estimation, both of which are commonplace for objects captured using consumer-grade RGB-D cameras. To overcome these limitations, our proposed optimization can synthesize a high resolution view-dependent texture map for any virtual camera location. Synthetic textures are generated by uniformly sampling a spherical virtual camera set surrounding the virtual object, thereby enabling efficient real-time rendering for all potential viewing directions.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Virtual Content Creation Using Dynamic Omnidirectional Texture Synthesis\",\"authors\":\"Chih-Fan Chen, Evan Suma Rosenberg\",\"doi\":\"10.1109/VR.2018.8446410\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a dynamic omnidirectional texture synthesis (DOTS) approach for generating real-time virtual reality content captured using a consumer-grade RGB-D camera. Compared to a single fixed-viewpoint color map, view-dependent texture mapping (VDTM) techniques can reproduce finer detail and replicate dynamic lighting effects that become especially noticeable with head tracking in virtual reality. However, VDTM is very sensitive to errors such as missing data or inaccurate camera pose estimation, both of which are commonplace for objects captured using consumer-grade RGB-D cameras. To overcome these limitations, our proposed optimization can synthesize a high resolution view-dependent texture map for any virtual camera location. 
Synthetic textures are generated by uniformly sampling a spherical virtual camera set surrounding the virtual object, thereby enabling efficient real-time rendering for all potential viewing directions.\",\"PeriodicalId\":355048,\"journal\":{\"name\":\"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)\",\"volume\":\"50 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VR.2018.8446410\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VR.2018.8446410","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

We present a dynamic omnidirectional texture synthesis (DOTS) approach for generating real-time virtual reality content captured using a consumer-grade RGB-D camera. Compared to a single fixed-viewpoint color map, view-dependent texture mapping (VDTM) techniques can reproduce finer detail and replicate dynamic lighting effects that become especially noticeable with head tracking in virtual reality. However, VDTM is very sensitive to errors such as missing data or inaccurate camera pose estimation, both of which are commonplace for objects captured using consumer-grade RGB-D cameras. To overcome these limitations, our proposed optimization can synthesize a high resolution view-dependent texture map for any virtual camera location. Synthetic textures are generated by uniformly sampling a spherical virtual camera set surrounding the virtual object, thereby enabling efficient real-time rendering for all potential viewing directions.
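
The abstract does not specify implementation details, but the core idea of uniformly sampling a spherical set of virtual cameras around the captured object, then selecting the synthetic texture matching the current viewing direction, can be illustrated with a short sketch. The sampling scheme (a Fibonacci lattice), the camera count, and the nearest-camera lookup below are illustrative assumptions rather than the authors' actual DOTS optimization.

```python
# Illustrative sketch (not the authors' implementation): uniformly sample
# virtual camera positions on a sphere around an object and, at render time,
# pick the precomputed synthetic texture whose camera best matches the
# current viewing direction.
import numpy as np

def fibonacci_sphere_cameras(n_cameras: int, radius: float, center: np.ndarray) -> np.ndarray:
    """Return n_cameras positions distributed near-uniformly on a sphere
    (Fibonacci lattice) surrounding the captured object."""
    i = np.arange(n_cameras)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - 2 * (i + 0.5) / n_cameras      # uniform spacing in z
    theta = 2 * np.pi * i / golden          # golden-angle spacing in azimuth
    r_xy = np.sqrt(1 - z ** 2)
    dirs = np.stack([r_xy * np.cos(theta), r_xy * np.sin(theta), z], axis=1)
    return center + radius * dirs

def nearest_camera_index(view_pos: np.ndarray, cameras: np.ndarray, center: np.ndarray) -> int:
    """Pick the virtual camera whose direction to the object best matches
    the viewer's direction (maximum cosine similarity)."""
    view_dir = view_pos - center
    view_dir = view_dir / np.linalg.norm(view_dir)
    cam_dirs = cameras - center
    cam_dirs = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    return int(np.argmax(cam_dirs @ view_dir))

if __name__ == "__main__":
    center = np.zeros(3)
    cams = fibonacci_sphere_cameras(n_cameras=256, radius=1.5, center=center)
    # At runtime the tracked head position would determine which synthetic
    # view-dependent texture to bind; here we simply print the chosen index.
    head_pos = np.array([0.2, 1.0, 2.0])
    print("use synthetic texture #", nearest_camera_index(head_pos, cams, center))
```

Because the textures are precomputed for every sampled camera, the per-frame work reduces to a lookup like the one above, which is consistent with the real-time rendering claim in the abstract.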