{"title":"Beyond the Screen","authors":"W. Choi, Taehyung Lee, Wonchul Kang","doi":"10.1145/3355088.3365140","DOIUrl":"https://doi.org/10.1145/3355088.3365140","url":null,"abstract":"While working on the theme park ride project, we were required to solve problems of making a projection screen as a window that shows the virtual world behind it. To create this magical effect, we developed our own image resampling pipeline called ”BeyondScreen”. For each screen, it generates a video clip that makes the audience in the ride feel like they are seeing the virtual space. It produces a sense of depth by showing hidden areas beyond the screen as the viewpoint moves. After ensuring that the algorithm works well, we developed custom plug-ins for Nuke, RenderMan, and Houdini so that it can be easily used in the existing VFX pipeline.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116322104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flexible Ray Traversal with an Extended Programming Model","authors":"Won-Jong Lee, Gabor Liktor, K. Vaidyanathan","doi":"10.1145/3355088.3365149","DOIUrl":"https://doi.org/10.1145/3355088.3365149","url":null,"abstract":"The availability of hardware-accelerated ray tracing in GPUs and standardized APIs has led to a rapid adoption of ray tracing in games. While these APIs allow programmable surface shading and intersections, most of the ray traversal is assumed to be fixed-function. As a result, the implementation of per-instance Level-of-Detail (LOD) techniques is very limited. In this paper, we propose an extended programming model for ray tracing which includes an additional programmable stage called the traversal shader that enables procedural selection of acceleration structures for instances. Using this programming model, we demonstrate multiple applications such as procedural multi-level instancing and stochastic LOD selection that can significantly reduce the bandwidth and memory footprint of ray tracing with no perceptible loss in image quality.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"345 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115970150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Outdoor Sound Propagation in Inhomogeneous Atmosphere via Precomputation","authors":"Jin Liu, Shiguang Liu","doi":"10.1145/3355088.3365168","DOIUrl":"https://doi.org/10.1145/3355088.3365168","url":null,"abstract":"Most of the sound propagation simulation methods are dedicated to room scenes, and only few of them can be used for outdoor scenes. Meanwhile, although ray tracing is used for simulation, it cannot accurately simulate some acoustic effects. In acoustics, some wave-based methods are accurate but suffer from low computational efficiency. We present a novel wave-based precomputation method that enables accurate and fast simulation of sound propagation in inhomogeneous atmosphere. An extended FDTD-PE method is used to calculate the sound pressure in 3D scene. The space is divided into two parts, the source region in which the FDTD method is employed and the far-field region in which the PE method is employed. A coupling methodology is applied at the junction between the two regions. The sound pressure data is further compressed to get the impulse response (IR) of the source region and sound attenuation function of the far-field region. Finally, we validated our method through various experiments, and the results indicate that our method can accurately simulate the sound propagation, with quite higher speed and lower storage.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122000265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Generation of Chinese Vector Fonts via Deep Layout Inferring","authors":"Yichen Gao, Z. Lian, Yingmin Tang, Jianguo Xiao","doi":"10.1145/3355088.3365142","DOIUrl":"https://doi.org/10.1145/3355088.3365142","url":null,"abstract":"Designing a high-quality Chinese vector font library which can be directly used in real applications is very time-consuming, since the font library typically consists of large amounts of glyphs. To address this problem, we propose a data-driven system in which only a small number (about 10%) of glyphs need to be designed. Specifically, the system first automatically decomposes those input glyphs into vectorized components. Then, a layout prediction module based on deep neural network is applied to learn the layout and structure information of input characters. Finally, proper components are selected to assemble each character based on the predicted layout to build the font library that can be directly used in computers and smart mobile devices. Experimental results demonstrate that our system synthesizes high-quality glyphs and significantly enhances the producing efficiency of Chinese vector fonts.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126029830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Faster RPNN: Rendering Clouds with Latent Space Light Probes","authors":"M. Panin, S. Nikolenko","doi":"10.1145/3355088.3365150","DOIUrl":"https://doi.org/10.1145/3355088.3365150","url":null,"abstract":"We introduce latent space light probes for fast rendering of high albedo anisotropic materials with multiple scattering. Our Faster RPNN model improves the performance of cloud rendering by precomputing some parts of the neural architecture, separating the parts that should be inferred at runtime. The model provides 2-3x speedup over state of the art Radiance-Predicting Neural Networks (RPNN), has negligible precomputation cost and low memory footprint, while providing results with low bias that are visually indistinguishable from computationally intensive path tracing.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115801445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Architecture of Integrated Machine Learning in Low Latency Mobile VR Graphics Pipeline","authors":"Haomiao Jiang, Rohit Rao Padebettu, Kazuki Sakamoto, Behnam Bastani","doi":"10.1145/3355088.3365154","DOIUrl":"https://doi.org/10.1145/3355088.3365154","url":null,"abstract":"In this paper, we discuss frameworks to execute machine learning algorithms in the mobile VR graphics pipeline to improve performance and rendered image quality in real time. We analyze and compare the benefits and costs of various possibilities. We illustrate the strength of using machine framework in graphics pipeline with an application of efficient spatial temporal super-resolution that amplifies GPU render power to achieve better image quality.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131403210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How NASA Uses Render Time Procedurals for Scientific Data Visualization","authors":"Kel Elkins, Gregory W. Shirah","doi":"10.1145/3355088.3365169","DOIUrl":"https://doi.org/10.1145/3355088.3365169","url":null,"abstract":"In data-driven visualizations, the size and accessibility of data files can greatly impact the computer graphics production pipeline. Loading large and complex data structures into 3D animation software such as Maya may result in system performance issues that limit interactivity. At NASA's Scientific Visualization Studio, we have implemented methods to procedurally read data files and generate graphics at render time. We accomplish this by creating per-frame calls in our animation software that are executed by the renderer. This procedural workflow accelerates visualization production and iteration.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134541660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Piku Piku Interpolation","authors":"R. Roberts, R. K. D. Anjos, K. Anjyo, J. P. Lewis","doi":"10.1145/3355088.3365156","DOIUrl":"https://doi.org/10.1145/3355088.3365156","url":null,"abstract":"We propose a sampling algorithm that reassembles real-life movements to add detail to early-stage facial animation. We examine the results of applying our algorithm with FACS data extracted from video. Using our algorithm like an interpolation scheme, animators can reduce the time required to produce detailed animation.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"CE-24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126541802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Binary Space Partitioning Visibility Tree for Polygonal Light Rendering","authors":"Hiroki Okuno, Kei Iwasaki","doi":"10.1145/3355088.3365153","DOIUrl":"https://doi.org/10.1145/3355088.3365153","url":null,"abstract":"In this paper, we present a method to render shadows for physically-based materials under polygonal light sources. Direct illumination calculation from a polygonal light source involves the triple product integral of the lighting, the bidirectional reflectance distribution function (BRDF), and the visibility function over the polygonal domain, which is computation intensive. To achieve real-time performance, work on polygonal light shading exploits analytic solutions of boundary integrals along the edges of the polygonal light at the cost of lacking shadowing effects. We introduce a hierarchical representation for the pre-computed visibility function to retain the merits of closed-form solutions for boundary integrals. Our method subdivides the polygonal light into a set of polygons visible from a point to be shaded. Experimental results show that our method can render complex shadows with a GGX microfacet BRDF from polygonal light sources at interactive frame rates.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124545319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Potential of Light Fields in Media Productions","authors":"Jonas Trottnow, S. Spielmann, T. Lange, Kelvin Chelli, Marek Solony, P. Smrz, P. Zemčík, W. Aenchbacher, M. Grogan, Martin Alain, A. Smolic, Trevor Canham, Olivier Vu-Thanh, Javier Vazquez-Corral, M. Bertalmío","doi":"10.1145/3355088.3365158","DOIUrl":"https://doi.org/10.1145/3355088.3365158","url":null,"abstract":"One aspect of the EU funded project SAUCE is to explore the possibilities and challenges of integrating light field capturing and processing into media productions. A special light field camera was build by Saarland University [Herfet et al. 2018] and is first tested under production conditions in the test production “Unfolding” as part of the SAUCE project. Filmakademie Baden-Württemberg developed the contentual frame, executed the post-production and prepared a complete previsualization. Calibration and post-processing algorithms are developed by the Trinity College Dublin and the Brno University of Technology. This document describes challenges during building and shooting with the light field camera array, as well as its potential and challenges for the post-production.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121484771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}