{"title":"Adaptive rendering based on weighted local regression","authors":"Bochang Moon, N. Carr, Sung-eui Yoon","doi":"10.1145/2614106.2614160","DOIUrl":"https://doi.org/10.1145/2614106.2614160","url":null,"abstract":"Monte Carlo ray tracing is considered one of the most effective techniques for rendering photo-realistic imagery, but requires a large number of ray samples to produce converged or even visually pleasing images. We develop a novel image-plane adaptive sampling and reconstruction method based on local regression theory. A novel local space estimation process is proposed for employing the local regression, by robustly addressing noisy high-dimensional features. Given the local regression on estimated local space, we provide a novel two-step optimization process for selecting band-widths of features locally in a data-driven way. Local weighted regression is then applied using the computed bandwidths to produce a smooth image reconstruction with well-preserved details. We derive an error analysis to guide our adaptive sampling process at the local space. We demonstrate that our method produces more accurate and visually pleasing results over the state-of-the-art techniques across a wide range of rendering effects. Our method also allows users to employ an arbitrary set of features, including noisy features, and robustly computes a subset of them by ignoring noisy features and decorrelating them for higher quality. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129818994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic on-mesh procedural generation control","authors":"C. Buron, Jean-Eudes Marvie, Gaël Guennebaud, Xavier Granier","doi":"10.1145/2614106.2614129","DOIUrl":"https://doi.org/10.1145/2614106.2614129","url":null,"abstract":"Procedural representations are powerful tools to generate highly detailed objects through amplification rules. However controlling such rules within environment contexts (e.g., growth on shapes) is restricted to CPU-based methods, leading to limited performances. To interactively control shape grammars, we introduce a novel approach based on a marching rule on the GPU. Environment contexts are encoded as geometry texture atlases, on which indirection pixels are computed around each chart borders. At run-time, the new rule is used to march through the texture atlas and efficiently jumps from chart to chart using indirection information. The underlying surface is thus followed during the grammar development. Moreover, additional texture information can be used to easily constrain the grammar interpretation. For instance, one can paint directly on the mesh allowed growth areas or the leaves density, and observe the procedural model adapt on-the-fly to this new environment. Finally, to preserve smooth geometry deformation at shape instantiation stage, we use cubic Bezier curves computed using a depth-first grammar traversal.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"4 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121000078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DreamWorks animation's face system, a historical perspective: from ANTZ and Shrek to Mr Peabody & Sherman","authors":"L. Modesto, D. Walsh","doi":"10.1145/2614106.2614131","DOIUrl":"https://doi.org/10.1145/2614106.2614131","url":null,"abstract":"We present an overview of the Academy Award winning Facial Animation System utilized by DreamWorks animation in most of its movies since 1997 from Antz and Shrek to Mr Peabody & Sherman. The presentation will cover the concept utilized and its application in the creation of the system. As the requirements and challenges of each new movie change constantly, the necessary evolution and adaptation of the system will also be discussed.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117086290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A dual-beam 3D searchlight BSSRDF","authors":"Eugene d'Eon","doi":"10.1145/2614106.2614140","DOIUrl":"https://doi.org/10.1145/2614106.2614140","url":null,"abstract":"classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. SIGGRAPH 2014, August 10 – 14, 2014, Vancouver, British Columbia, Canada. 2014 Copyright held by the Owner/Author. ACM 978-1-4503-2960-6/14/08 A Dual-Beam 3D Searchlight BSSRDF","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126893593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Level of detail in an age of GI: rethinking crowd rendering","authors":"Paul Kanyuk","doi":"10.1145/2614106.2614119","DOIUrl":"https://doi.org/10.1145/2614106.2614119","url":null,"abstract":"In the past few years, the feature film animation and vfx community have made a marked move toward single pass, physically based, global illumination (GI) rendering. Improvements in hardware, software, and algorithms, have allowed studios simplify their lighting setups and produce more photorealistic imagery. But woe to the early adopters who thought they could render massive scenes using the same techniques that allowed rasterizers to churn through near unlimited complexity. Anecdotes across studios, using a variety of pipelines/renderers, suggest that 100+ core hour renders have not been uncommon, leading to limited iteration, rushed hardware purchases, and panicky producers. Pixar had a brush with such panic on the film Monsters University, where rendering crowds of highly detailed monsters, revealed the challenges of scale inherent to the in core nature of physically plausible rendering. This launched an effort to re-evaluate the various level of detail techniques used at the studio and explore new variations more amenable to GI.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126249151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building highly parallel character rigs","authors":"Guido Zimmermann, Kevin Ochs, R. Helms","doi":"10.1145/2614106.2614154","DOIUrl":"https://doi.org/10.1145/2614106.2614154","url":null,"abstract":"DreamWorks Animation introduced a new parallel graph system, LibEE [WATT and HAMPTON 2012] as the engine for our next generation in-house animation tool. It became clear that we needed to make changes in how we set up our character rigs for production. The new graph engine has two types of multithreading: first individual nodes are internally multithreaded, second the graph itself can run nodes and groups of nodes in parallel. The second type in particular turns out to give the greatest performance gains for the evaluation of our characters. It is also the part that is determined by the construction of the rig itself. To take full advantage of this new system we needed to restructure our characters by enabling different parts of the character to evaluate in parallel as much as possible. This talk focuses on how we build our character rigs to improve graph performance, including changes to workflows and strategies required by our transition from serial to parallel graph structures. Because our animation software engine is the first in the industry to have a parallelized graph, many of these changes are novel, and some were unexpected.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133883621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating a life-sized automultiscopic Morgan Spurlock for CNNs \"Inside Man\"","authors":"Andrew Jones, J. Unger, Koki Nagano, Jay Busch, Xueming Yu, H. Peng, O. Alexander, P. Debevec","doi":"10.1145/2614106.2614177","DOIUrl":"https://doi.org/10.1145/2614106.2614177","url":null,"abstract":"We present a system for capturing and rendering life-size 3D human subjects on an automultiscopic display. Automultiscopic 3D displays allow a large number of viewers to experience 3D content simultaneously without the hassle of special glasses or head gear. Such displays are ideal for human subjects as they allow for natural personal interactions with 3D cues such as eye-gaze and complex hand gestures. In this talk, we will focus on a case-study where our system was used to digitize television host Morgan Spurlock for his documentary show ”Inside Man” on CNN. Automultiscopic displays work by generating many simultaneous views with highangular density over a wide-field of view. The angular spacing between between views must be small enough that each eye perceives a distinct and different view. As the user moves around the display, the eye smoothly transitions from one view to the next. We generate multiple views using a dense horizontal array of video projectors. As video projectors continue to shrink in size, power consumption, and cost, it is now possible to closely stack hundreds of projectors so that their lenses are almost continuous. However this display presents a new challenge for content acquisition. It would require hundreds of cameras to directly measure every projector ray. We achieve similar quality with a new view interpolation algorithm suitable for dense automultiscopic displays.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123470325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Path space similarity determined by Fourier histogram descriptors","authors":"Pascal Gautron, M. Droske, Carsten Wächter, Lutz Kettner, A. Keller, Nikolaus Binder, Ken Dahm","doi":"10.1145/2614106.2614117","DOIUrl":"https://doi.org/10.1145/2614106.2614117","url":null,"abstract":"classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. SIGGRAPH 2014, August 10 – 14, 2014, Vancouver, British Columbia, Canada. 2014 Copyright held by the Owner/Author. ACM 978-1-4503-2960-6/14/08 Path Space Similarity determined by Fourier Histogram Descriptors","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127194946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Capturing the infinite universe in \"Lucy\": fractal rendering in film production","authors":"Alex Kim, D. P. Ferreira, Stephen Bevins","doi":"10.1145/2614106.2614166","DOIUrl":"https://doi.org/10.1145/2614106.2614166","url":null,"abstract":"There are several existing fractal renderers that can create high quality images, e.g. ultra-fractal, mandelbulber, xenodream, Fractal Explorer. While these renderers focus on computational performance and image quality, they tend to overlook factors like layout and interaction with other elements/simulations as well as pipeline integration, all of which are essential to movie production. The creation of explicit geometry for highly detailed fractals can quickly result in meshes with overwhelming polygon counts, and consequently most production renderers, e.g. Mantra, will run out of memory during the construction of the associated mesh acceleration data structures. To address these shortcomings we propose a novel approach to production rendering of detailed fractals. Our technique, which is based on a hybrid particle/OpenVDB representation, can render fractals with very high details while incurring a small memory footprint. Additionally our approach is easily parallelizable across multiple machines, allows for a quick viewport preview for layout, can utilize existing production lights/shaders, can produce proxy meshes for interactions with simulations, and supports volume rendering.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125698822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High level saliency prediction for smart game balancing","authors":"G. Koulieris, G. Drettakis, D. Cunningham, K. Mania","doi":"10.1145/2614106.2614157","DOIUrl":"https://doi.org/10.1145/2614106.2614157","url":null,"abstract":"Predicting visual attention can significantly improve scene design, interactivity and rendering. For example, image synthesis can be accelerated by reducing computation on non-attended scene regions; attention can also be used to improve LOD. Most previous attention models are based on low-level image features, as it is computationally and conceptually challenging to take into account highlevel factors such as scene context, topology or task. As a result, they often fail to predict saccadic targets because scene semantics strongly affect the planning and execution of fixations. In this talk, we present the first automated high level saliency predictor that incorporates the schema [Bartlett 1932] and singleton [Theeuwes and Godijn 2002] hypotheses into the Differential-Weighting Model (DWM) [Eckstein 1998]. The scene schema effect states that a scene is comprised of objects expected to be found in a specific context as well objects out of context which are salient (Figure 1a). The singleton effect refers to the finding that viewer’s attention is captured by isolated objects (Figure 1b). We propose a new model to account for high-level object saliency as predicted by the schema and singleton hypotheses by extending the DWM. The DWM models attentional processing using physiological noise in brain neurons and Gaussian combination rules. A GPU implementation of our model estimates the probabilities of individual objects to be foveated and is used in an innovative game level editor that automatically suggests game objects’ positioning. The difficulty of a game can then be implicitly adjusted since topology affects object search completion time.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126971584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}