{"title":"Gears of War 4: creating a layered material system for 60fps","authors":"Colin Penty, Ian Wong","doi":"10.1145/3084363.3085026","DOIUrl":"https://doi.org/10.1145/3084363.3085026","url":null,"abstract":"We have created a new material system for Gears of War 4 inside Unreal Engine that allows artists to layer dozens of materials with complete material tuning control and flexibility - then cook out the results in-engine for efficient run-time performance.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131248158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flexible pipeline for crowd production","authors":"Mungo Pay, D. Maupu, M. Prazák","doi":"10.1145/3084363.3085042","DOIUrl":"https://doi.org/10.1145/3084363.3085042","url":null,"abstract":"The complexity of crowd shots can vary greatly, from simple vignetting tasks that add life to an environment, to large and complex battle sequences involving thousands of characters. For this reason, a \"one size fits all\" crowd solution may not be optimal, both in terms of design and usability and in the allocation of crew. In this talk we present a suite of tools, developed across multiple platforms, each optimised for a specific crowd task. These are underpinned by a data interchange library that allows for modification at any stage of the pipeline.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130343537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DreamWorks fabric shading model: from artist friendly to physically plausible","authors":"Priyamvad Deshmukh, Feng Xie, Eric Tabellion","doi":"10.1145/3084363.3085024","DOIUrl":"https://doi.org/10.1145/3084363.3085024","url":null,"abstract":"Since Shrek 2, DreamWorks artists have used the fabric model developed by [Glumac and Doepp 2004] extensively for cloth material shading. Even after we developed the physically based microcylindrical cloth model of [Sadeghi et al. 2013], they continued to prefer the intuitive control of the DreamWorks fabric shading model, which is also a cylindrical shading model, with easy-to-use artistic controls for highlights and highlight directions.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124722157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical flow-based face tracking in The Mummy","authors":"Curtis Andrus, Endre Balint, Chong Deng, S. Coupe","doi":"10.1145/3084363.3085048","DOIUrl":"https://doi.org/10.1145/3084363.3085048","url":null,"abstract":"In The Mummy, much of MPC's work involved augmenting the Ahmanet character with various CG elements. These include eye splitting, runes, and rotten or torn skin; see Figure 1 for an example. These elements needed to be added on top of a live performance, so tracking a 3D model to Ahmanet's face was necessary. This sort of work isn't uncommon, but given the high volume of shots MPC did for this show, it was clear that new tools would be necessary to simplify the process.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123915399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The eyes have it: comprehensive eye control for animated characters","authors":"Pilar Molina Lopez, J. Richards","doi":"10.1145/3084363.3085061","DOIUrl":"https://doi.org/10.1145/3084363.3085061","url":null,"abstract":"Eyes are often the most important feature in a character's performance, conveying emotion, timing and intention as well as hints about what comes next in the story. Stories are driven by characters, and audience investment comes from empathy for those characters. Unless the viewer is making a concerted effort to look elsewhere on screen, they usually concentrate on the eyes of the main character. Therefore, a great amount of effort and time is spent making the eyes of our characters look as expressive as possible. When done improperly, the eyes make a character look dead and unappealing. The technology we use to create our characters' eyes gives artists the flexibility to push the boundaries of their craft; it helps them portray characters that communicate the emotions a story requires. To achieve this we designed the set of techniques that compose our eye pipeline. It has been refined over many years through continued collaboration among several departments at our studio, from modeling and rigging through animation to lighting. It allows animators to follow their expressive style while also providing materials artists and lighters with the necessary input to achieve a realistic look.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128272949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Muscle simulation for facial animation in Kong: Skull Island","authors":"Matthew Cong, Lana Lan, Ronald Fedkiw","doi":"10.1145/3084363.3085040","DOIUrl":"https://doi.org/10.1145/3084363.3085040","url":null,"abstract":"For Kong: Skull Island, Industrial Light & Magic created an anatomically motivated facial simulation model for Kong that includes the facial skeleton and musculature. We applied a muscle simulation framework that allowed us to target facial shapes while maintaining desirable physical properties to ensure that the simulations stayed on-model. This allowed muscle simulations to be used as a powerful tool for adding physical detail to and improving the anatomical validity of both blendshapes and blendshape animations in order to achieve more realistic facial animation with less hand sculpting.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128683480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new contour method for highly detailed geometry","authors":"A. Bauer","doi":"10.1145/3084363.3085052","DOIUrl":"https://doi.org/10.1145/3084363.3085052","url":null,"abstract":"Ray-traced contours are inherently challenged by highly detailed geometry, as commonly found in organic shapes. Existing contour methods cannot reflect such complexity in an artistically pleasing way (Figure 1 A), and animations are prone to flicker. After a brief explanation of contour generation and its inherent challenges, this talk presents a novel approach to rendering aesthetic and flicker-free contours on highly detailed geometry (Figure 1 B). The new method employs sub-pixel-level sub-sampling to achieve a high level of detail quality, and supports contours in transparency, reflection and refraction. The implementation uses mental ray (Unified Sampling mode) but could be realized in other ray tracing renderers as well.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121148376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Circular separable convolution depth of field","authors":"K. Garcia","doi":"10.1145/3084363.3085022","DOIUrl":"https://doi.org/10.1145/3084363.3085022","url":null,"abstract":"Circular Separable Convolution Depth of Field (CSC DoF) is a mathematical adaptation and implementation of a separable circular filter, which uses complex-plane phasors to create very accurate and fast bokeh. At its core, the technique convolves a circular blur pattern in the frequency domain using a horizontal and a vertical pass, representing the frame buffer with complex numbers. Because the convolution is separable, it renders orders of magnitude faster than brute-force and sprite-based approaches. Important properties of the technique include convolution separability, low memory bandwidth and support for large-radius circles. The technique has shipped in Madden NFL 15, Madden NFL 16, Madden NFL 17, FIFA 17 and Rory McIlroy PGA Tour. The implementation includes an offline shader code generation step with pre-computed frequency-domain filters, and multiple weighted passes for processing the real and imaginary components. We will present the mathematical derivation and some caveats for achieving the required precision in the intermediate frequency-domain frame buffers.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121314651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gears of War 4: custom high-end graphics features and performance techniques","authors":"J. Malmros","doi":"10.1145/3084363.3085162","DOIUrl":"https://doi.org/10.1145/3084363.3085162","url":null,"abstract":"In this technical post-mortem we discuss how The Coalition implemented new custom graphics features and optimized the rendering technology to achieve the performance needed for the high visual bar of Gears of War 4.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"2 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127963128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From VFX project management to predictive forecasting","authors":"Hannes Ricklefs, Stefan Puschendorf, S. Bhamidipati, Brian Eriksson, Akshay Pushparaja","doi":"10.1145/3084363.3085036","DOIUrl":"https://doi.org/10.1145/3084363.3085036","url":null,"abstract":"VFX production companies are currently challenged by the increasing complexity of visual effects shots combined with constant schedule demands. The ability to execute in an efficient and cost-effective manner requires extensive coordination between different sites, different departments, and different artists. This coordination demands data-intensive analysis of VFX workflows beyond standard project management practices and existing tools. In this paper, we propose a novel solution centered around a general evaluation data model and APIs that convert production data (job/scene/shot/schedule/task) into business intelligence insights, enabling performance analytics and the generation of data summaries for process control. These analytics provide an impact-measuring framework for analyzing performance over time, with the introduction of new production technologies, and across separate jobs. Finally, we show how historical production data can be used to create predictive analytics for the accurate forecasting of future VFX production process performance.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127651705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}