{"title":"Finding the Look of Souls","authors":"Jared Fong, Jonas Jarvers, Markus Kranzler, Ryan Michero","doi":"10.1145/3388767.3407342","DOIUrl":"https://doi.org/10.1145/3388767.3407342","url":null,"abstract":"Finding the look of the soul characters for the movie Soul was a challenging process. The art direction was based on an ethereal design and turned out to be a moving target during the look development collaboration between the different departments involved. While on a tight schedule, multiple technical approaches were taken in order to find solutions to the difficult design challenges. The final approach features volumetric shading with a unique line treatment on extremely flexible yet simple and appealing character designs.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122234575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learned Super Resolution for Feature Film Production","authors":"Vaibhav Vavilala, Mark Meyer","doi":"10.1145/3388767.3407334","DOIUrl":"https://doi.org/10.1145/3388767.3407334","url":null,"abstract":"Upscaling techniques are commonly used to create high resolution images, which are cost-prohibitive or even impossible to produce otherwise. In recent years, deep learning methods have improved the detail and sharpness of upscaled images over traditional algorithms. Here we discuss the motivation and challenges of bringing deep learned super resolution to production at Pixar, where upscaling is useful for reducing render farm costs and delivering high resolution content.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133592204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"That’s a wrap: Manifold Garden rendering retrospective","authors":"Arthur Brussee, A. Saraev, William Chyr","doi":"10.1145/3388767.3407385","DOIUrl":"https://doi.org/10.1145/3388767.3407385","url":null,"abstract":"1 TOROIDAL GEOMETRY RENDERING The world of Manifold Garden repeats infinitely: if you fall off the bottom of the world, you’ll end up on top again. We relied on a sleight of hand for this effect: duplicate the world in a grid, and teleport the player when they reach the boundary of the centre world instance. This naive approach worked surprisingly well, as long as one takes care to make heavy use of instancing and LODs. We created an automatic decimation pipeline that generated LODs for the more distant wrap instances and kept the art workflow uninterrupted. For dynamic objects, this approach was still too slow; there are n³ instances to update in the world. Instead, for these we relied on a fully GPU-driven approach: • Write out transform data for the centre instance to a compute buffer. • Frustum cull each instance, in each cell, in a compute shader; append visible instances to a buffer. • Use indirect dispatch to render all instances in one pass. This approach saved us from having to update separate game entities. In the future, this technique could be scaled up and used for everything, but it currently has too many limitations regarding sorting and LODs. Another interesting point to consider is shadows. Of course, in a real toroidal space light propagation is quite bizarre! Rather we pretend light is","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124451713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhysLight: An End-to-End Pipeline for Scene-Referred Lighting","authors":"A. Langlands, Luca Fascione","doi":"10.1145/3388767.3407368","DOIUrl":"https://doi.org/10.1145/3388767.3407368","url":null,"abstract":"We present a visual effects production workflow for using spectral sensitivity data of DSLR and digital cinema cameras to reconstruct the spectral energy distribution of a given live-action scene and perform rendering in physical units. We can then create images that respect the real-world settings of the cinema camera, properly accounting for white balance, exposure, and the characteristics of the sensor.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130139453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Velocity-based compression of 3D rotation, translation, and scale animations for AAA video games","authors":"D. Goodhue","doi":"10.1145/3388767.3407392","DOIUrl":"https://doi.org/10.1145/3388767.3407392","url":null,"abstract":"In our previous publication [Goodhue 2017], we presented a prototype based on promising new techniques for how the state-of-the-art in animation compression for video game engines might be advanced. Later that year, we completed development on a production-quality version of that technology, which has since seen active use in the ongoing production of future AAA titles. Many of our previous hypotheses were put to the test, and the algorithms were generalized to support translation and scale animation keys in addition to rotations. Having used this new technology for quite some time now, we were able to confirm our expectations regarding the sort of technical problems it presents, as well as how to solve them. We are also able to compare our results to other state-of-the-art techniques for the first time, thus confirming the efficacy of our method.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124498804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is It Acid or Is It Fire?: How to Train Your Dragon: The Hidden World","authors":"Amaury Aubel, K. C. Ong","doi":"10.1145/3388767.3407356","DOIUrl":"https://doi.org/10.1145/3388767.3407356","url":null,"abstract":"The animated movie How to Train Your Dragon: The Hidden World introduces a new species of dragon in the franchise: the Deathgripper. This dragon possesses the ability to spit green acid that both dissolves and sets ablaze the objects it touches. In this talk we present the various challenges posed by this unusual effect, from the visual development phase to production shots.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124981351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Ranking with Density Trees for Google Maps","authors":"Jared Johnson, Sema Berkiten","doi":"10.1145/3388767.3407353","DOIUrl":"https://doi.org/10.1145/3388767.3407353","url":null,"abstract":"We propose an unsupervised learning technique for image ranking of photos contributed by Google Maps users. A density tree is built for each point-of-interest (POI), such as The National Mall or the Louvre. This tree is used to construct clusters, which are then ranked based on size and quality. We choose a representative image for each cluster, resulting in a ranked set of high-quality, diverse, and relevant images for each POI. We validated our algorithm in a side-by-side preference study.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122353353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frozen 2: Effects Vegetation Pipeline","authors":"N. Joseph, Vijoy Gaddipati, Benjamin Fiske, Marie Tollec, Tad Miller","doi":"10.1145/3388767.3409320","DOIUrl":"https://doi.org/10.1145/3388767.3409320","url":null,"abstract":"Walt Disney Animation Studios’ “Frozen 2” takes place in the Enchanted Forest, which is full of vegetation (e.g. distinctive leaves and foliage) that is manipulated by other characters, including the wind character, Gale. “Frozen 2” also has multiple scenes where a large portion of the forest is on fire. The quantity and scale of vegetation effects in “Frozen 2” presented a challenge to our Effects department. We developed two workflows, the Vegetation Asset workflow and the Fire Tree workflow, to help us achieve high quality artistic performance of procedural tree animation and fire tree simulations on “Frozen 2”. Using the new workflows we not only saw an order of magnitude improvement in the work efficiency of our Effects artists, but also saw an increase in work satisfaction and overall artistic quality, since the workflows handled the data management of various assets in the shot, allowing artists to concentrate more on their craft.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126589230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Animation of Togo: Achieving hyper realism for 11 CG dogs","authors":"Arna Diego, Leonardo Bonisolli","doi":"10.1145/3388767.3408944","DOIUrl":"https://doi.org/10.1145/3388767.3408944","url":null,"abstract":"Disney+’s ‘Togo’ is a testament to the critical creative partnership between DNEG's Build, Rigging and Animation departments in the pursuit of a realistic CG dog. This talk will explore the intricacies of creating photorealistic dogs from ideation to finish, demonstrating the process from an Animation standpoint while addressing the collaborative nature of the project.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126360526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented and Virtual Reality Application Design for Immersive Learning Research Using Virtual Nature: Making Knowledge Beautiful and Accessible with Information Fidelity","authors":"Maria C. R. Harrington","doi":"10.1145/3388767.3407318","DOIUrl":"https://doi.org/10.1145/3388767.3407318","url":null,"abstract":"Described are two applications using immersive augmented reality (AR) and virtual reality (VR) for informal learning research. A critical design factor, resulting from the authentication process used in sourcing all text, media, and data, is the high information fidelity (truth) of all signals transmitted to the human. The AR Perpetual Garden App was developed to annotate the Carnegie Museum of Natural History's dioramas and gardens to bring learning to all visitors. The Virtual UCF Arboretum was developed to represent the real UCF Arboretum in VR for immersive learning research. More like a virtual diorama or virtual field trip, they are open to independent exploration and learning. Unlike fantasy games or creative animations, these environments used accurate content, with high information fidelity, to enhance immersion and presence. As data visualizations or simulations, and not point clouds or interactive 360 VR video, they can show past, present, and future scenarios from data. As applications intended for informal learning, the needs of learners as well as those of institutional stakeholders were integrated in a participatory design process by extending traditional user-centered design with expert-learner-user-centered design. The design patterns will be of interest to a broad community concerned with perception, emotions, learning, immersion and presence, and any who are developing educational, training and certification, or decision support applications with respect to improving natural knowledge.","PeriodicalId":368810,"journal":{"name":"Special Interest Group on Computer Graphics and Interactive Techniques Conference Talks","volume":"16 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120856581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}