{"title":"Efficient synthetic light field generation using adaptive multi-level rendering","authors":"Liang-Chi Tseng, W. Hsu","doi":"10.1145/3264746.3264759","DOIUrl":null,"url":null,"abstract":"Real-time global illumination rendering is very desirable for emerging applications such as Virtual Reality (VR) and Augmented Reality (AR). However, client devices have difficulties to support photorealistic rendering, such as Ray-Tracing, due to insufficient computing resources. Many modern frameworks adopted Light Field rendering to support device displaying. A Light Field can be pre-computed and store in cloud. During runtime, the display extracts the colors from the Light Field to generate arbitrary real time viewpoints or re-focusing within a predefined area. To efficiently compute the Light Field, We have combined DIBR (Depth-Image-Based-Rendering) and traditional ray-tracing in an adaptive fashion to synthesize images. By measuring the color errors during runtime, we adaptively determine the right balance between DIBR and Ray Tracing. To further optimize the computation efficiency, we also added a multi-level design to exploit the degree of shareable pixels among images to control the computation for error removal. Experiments show that we achieved up to 3.24X speedup in Light Field generation for relative simple scenes like Cornell Box, and about 2X speed up for complex scenes like Conference Room or Sponza.","PeriodicalId":186790,"journal":{"name":"Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3264746.3264759","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Real-time global illumination rendering is highly desirable for emerging applications such as Virtual Reality (VR) and Augmented Reality (AR). However, client devices have difficulty supporting photorealistic rendering techniques such as ray tracing due to insufficient computing resources. Many modern frameworks adopt Light Field rendering to support display on such devices. A Light Field can be pre-computed and stored in the cloud; at runtime, the display extracts colors from the Light Field to generate arbitrary real-time viewpoints or to re-focus within a predefined area. To compute the Light Field efficiently, we combine DIBR (Depth-Image-Based Rendering) and traditional ray tracing in an adaptive fashion to synthesize images. By measuring color errors at runtime, we adaptively determine the right balance between DIBR and ray tracing. To further improve computational efficiency, we add a multi-level design that exploits the degree of shareable pixels among images to control the computation spent on error removal. Experiments show speedups of up to 3.24X in Light Field generation for relatively simple scenes such as the Cornell Box, and about 2X for complex scenes such as Conference Room or Sponza.
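
The abstract's adaptive DIBR/ray-tracing split can be illustrated with a minimal sketch. This is not the paper's implementation: `warp_dibr`, `trace_ray`, the per-pixel error estimate, and the threshold value are all hypothetical stand-ins; only the control flow (warp cheaply, then re-trace pixels whose estimated color error is too large) reflects the idea described above.

```python
import numpy as np

# Illustrative sketch only: the real DIBR warp and ray tracer are
# full renderers. Here they are stubs so the adaptive control flow
# is runnable end to end. All names and values are assumptions.

H, W = 64, 64                 # assumed image resolution
ERROR_THRESHOLD = 0.05        # assumed color-error tolerance

def warp_dibr(reference, depth, target_cam):
    """Stub DIBR warp: returns a warped image plus a hypothetical
    per-pixel estimate of the warp's color error."""
    warped = reference.copy()             # real code would reproject via depth
    error = np.random.rand(H, W) * 0.1    # placeholder error estimate
    return warped, error

def trace_ray(x, y, target_cam):
    """Stub ray tracer: ground-truth shading for one pixel."""
    return np.array([0.5, 0.5, 0.5])      # placeholder color

def synthesize_view(reference, depth, target_cam):
    """Prefer cheap DIBR-warped pixels; fall back to ray tracing
    only where the estimated color error exceeds the tolerance."""
    warped, error = warp_dibr(reference, depth, target_cam)
    image = warped.copy()
    retrace = error > ERROR_THRESHOLD     # pixels the warp got wrong
    for y, x in zip(*np.nonzero(retrace)):
        image[y, x] = trace_ray(x, y, target_cam)
    return image

# Usage with placeholder inputs:
reference = np.random.rand(H, W, 3)       # placeholder reference view
depth = np.random.rand(H, W)              # placeholder depth map
view = synthesize_view(reference, depth, target_cam=None)
print(view.shape)                         # (64, 64, 3)
```

The paper's multi-level design and its actual error metric are not described in the abstract, so this sketch covers only the single-level adaptive selection step.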