Proceedings of the 2016 Symposium on Digital Production: Latest Publications

The Jungle Book: art-directing procedural scatters in rich environments
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947692
Stefano Cieri, A. Muraca, A. Schwank, Filippo Preti, Tony Micilotta
Abstract: Disney's live-action remake of The Jungle Book required us to build photorealistic, organic and complex jungle environments. We developed a geometry distributor with which artists could dress a large number of very diverse CG sets. It was used in over 800 shots to scatter elements ranging from debris to entire trees. Per-object attributes were configurable, and the distribution was driven by procedural shaders and custom maps. This paper describes how the system worked and demonstrates the efficiency and effectiveness of the workflows that originated from it. We present a number of scenarios where this pipelined, semi-procedural strategy for set dressing was crucial to the creation of high-quality environments.
Citations: 2
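The paper describes the scatter system at a workflow level rather than in code. As a rough illustration of the underlying idea, distribution driven by a density field with configurable per-instance attributes, a minimal Python sketch might look like the following (all names are invented for illustration; this is not the authors' tool):

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Instance:
    position: tuple   # (x, y) on the ground plane
    rotation: float   # yaw in degrees
    scale: float      # uniform scale factor
    asset: str        # which element to place (debris, tree, ...)

def density(x: float, y: float) -> float:
    """Stand-in for a procedural shader or painted map, returning 0..1.
    A cheap sine pattern here; production would sample noise or textures."""
    return 0.5 + 0.5 * math.sin(0.3 * x) * math.cos(0.2 * y)

def scatter(assets, area=(100.0, 100.0), candidates=10_000, seed=1):
    """Rejection-sample candidate points against the density field and
    assign randomized per-instance attributes to the survivors."""
    rng = random.Random(seed)
    placed = []
    for _ in range(candidates):
        x, y = rng.uniform(0.0, area[0]), rng.uniform(0.0, area[1])
        if rng.random() < density(x, y):  # keep with map-driven probability
            placed.append(Instance(
                position=(x, y),
                rotation=rng.uniform(0.0, 360.0),
                scale=rng.uniform(0.7, 1.3),
                asset=rng.choice(assets),
            ))
    return placed

if __name__ == "__main__":
    instances = scatter(["fern", "rock", "sapling"])
    print(f"placed {len(instances)} instances")
```

In a production system the density() stand-in would be replaced by the procedural shaders and custom maps the abstract mentions, with instances handed to the renderer as lightweight references rather than full geometry.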
Large scale VFX pipelines
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947689
A. Wright, M. Chambers, J. Israel, Nick Shore
Abstract: To ensure peak utilization of hardware resources, as well as handle the increasingly dynamic demands placed on its render farm infrastructure, Weta Digital developed custom queuing, scheduling, job description and submission systems, which work in concert to maximize the available cores across a large range of non-uniform task types. The render farm is one of the most important, high-traffic components of a modern VFX pipeline. Beyond the hardware itself, a render farm requires careful management and maintenance to ensure it is operating at peak efficiency. In Weta's case this hardware consists of a mix of over 80,000 CPU cores and a number of GPU resources, and as this has grown it has introduced many interesting scalability challenges. In this talk we aim to present our end-to-end solutions in the render farm space, from the structure of the resource and the inherent problems introduced at this scale, through the development of Plow, our management, queuing and monitoring software, and Kenobi, our job description framework. Finally, we detail the deployment process and the production benefits realized. Within each section we present the scalability issues encountered and detail our strategy, process and results in solving these problems. The ever-increasing complexity and computational demands of modern VFX drive Weta's need to innovate in all areas, from surfacing, rendering and simulation to core pipeline infrastructure.
Citations: 1
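Plow and Kenobi are proprietary, and the talk does not publish their interfaces. Purely as a toy illustration of the central scheduling problem, matching a priority-ordered queue of non-uniform jobs against a fixed pool of cores, consider this hypothetical Python sketch:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                      # lower value = dispatched first
    name: str = field(compare=False)
    cores: int = field(compare=False)  # cores this task requires

class Farm:
    """Toy scheduler: greedily dispatch the highest-priority jobs that
    still fit in the free core pool; requeue the rest."""
    def __init__(self, total_cores: int):
        self.free = total_cores
        self.queue = []

    def submit(self, job: Job):
        heapq.heappush(self.queue, job)

    def dispatch(self):
        running, deferred = [], []
        while self.queue:
            job = heapq.heappop(self.queue)
            if job.cores <= self.free:
                self.free -= job.cores
                running.append(job)
            else:
                deferred.append(job)  # does not fit this cycle
        for job in deferred:
            heapq.heappush(self.queue, job)
        return running

farm = Farm(total_cores=80_000)
farm.submit(Job(priority=0, name="hero_render", cores=48_000))
farm.submit(Job(priority=1, name="sim_cache", cores=40_000))
farm.submit(Job(priority=2, name="comp_batch", cores=8_000))
for job in farm.dispatch():
    print(job.name, "->", job.cores, "cores")
```

A real farm scheduler must additionally handle preemption, fair-share allocation, task dependencies and failure retries, none of which this sketch attempts.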
Camera tracking in visual effects: an industry perspective of structure from motion
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947697
Alastair Barber, D. Cosker, Oliver James, Ted Waine, Radhika J. Patel
Abstract: The 'matchmove', or camera-tracking, process is a crucial task and one of the first to be performed in the visual effects pipeline. An accurate solve for camera movement is imperative and has an impact on almost every other part of the pipeline downstream. In this work we present a comprehensive analysis of the process at a major visual effects studio, drawing on a large dataset of real shots. We also present guidelines and rules of thumb for camera-tracking scheduling which are, in what we believe to be an industry first, backed by statistical data drawn from our dataset. We also make available data from our pipeline showing the amount of time spent on camera tracking and the types of shot that are most common in our work. We hope this will be of interest to the wider computer vision research community and will assist in directing future research.
Citations: 12
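The paper analyzes the matchmove process statistically rather than publishing solver code. For readers unfamiliar with the structure-from-motion step underneath it, a standard two-view relative pose estimate with OpenCV (a common open-source baseline, not the studio's proprietary tracker) looks roughly like this:

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the camera rotation and translation between two frames
    from feature matches and the essential matrix."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier matches while fitting the essential matrix.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation matrix and unit-scale translation direction

# K holds the lens intrinsics (focal length and principal point), which
# matchmove typically recovers from lens metadata or calibration:
# K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
```

Production matchmove chains many such solves across a whole shot and refines them with bundle adjustment; the shot types the paper catalogs largely determine how hard that chain is.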
Gaffer: an open-source application framework for VFX
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947696
J. Haddon, Andrew Kaufman, D. Minor, D. Dresser, Ivan Imanishi, Paulo Nogueira
Abstract: Gaffer is an open-source application framework for visual effects production, which includes a multithreaded, node-based computation framework and a Qt-based UI framework for editing and viewing node graphs. The Gaffer frameworks were initiated independently by John Haddon in 2007 and have been used and extended in production at Image Engine since they were open-sourced in 2011. They have become vital to nearly the entire Image Engine pipeline, forming the basis of any node-based system we choose to develop.
Citations: 1
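Gaffer's real API is documented in its open-source repository; the toy sketch below illustrates only the general principle such frameworks share, pull-based graph evaluation in which a node recomputes only when something upstream has changed (this is not Gaffer code):

```python
class Node:
    """Toy graph node: caches its last result and recomputes only when
    an upstream input has been dirtied (pull-based lazy evaluation)."""
    def __init__(self, func, *inputs):
        self.func = func
        self.inputs = inputs
        self._cache = None
        self._dirty = True
        self._dependents = []
        for node in inputs:
            node._dependents.append(self)

    def set_dirty(self):
        self._dirty = True
        for node in self._dependents:
            node.set_dirty()

    def value(self):
        if self._dirty:
            self._cache = self.func(*(n.value() for n in self.inputs))
            self._dirty = False
        return self._cache

class Constant(Node):
    def __init__(self, v):
        super().__init__(lambda: v)
    def set(self, v):
        self.func = lambda: v
        self.set_dirty()

a, b = Constant(2.0), Constant(3.0)
total = Node(lambda x, y: x + y, a, b)
print(total.value())  # 5.0, computed on first pull
a.set(10.0)           # dirties everything downstream of 'a'
print(total.value())  # 13.0, recomputed because an input changed
```

Gaffer layers the multithreading and Qt-based UI the abstract describes on top of this basic pattern.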
Volumetric clouds in the VR movie, Allumette
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947699
Devon Penney
Abstract: Allumette is an immersive, highly emotional, and visually complex virtual reality movie that takes place in a city floating amongst clouds. The story unfolds around you, and as the viewer you are free to experience the action from a perspective of your choosing. This means you can move around and view the clouds from all angles, while the set and characters interact intimately with the landscape. This type of set is a formidable challenge even for traditional animated films, which have huge resources and hours to render each frame, making the look and feel of immersive clouds in VR uncharted territory full of difficult challenges. Existing lightweight techniques for real-time clouds, such as geometric shells with translucency shaders and sprite-based methods, combine poor quality with bad performance in VR, which led us to seek novel methods to tackle the problem. For Allumette, we first modeled clouds in virtual reality by painting cloud shells using a proprietary modeling tool, then used a third-party procedural modeling package to create and light the cloud voxel grids. Finally, these grids were exported in a custom file format and rendered using a ray marcher in our game engine. The resulting clouds take 0.6 ms per eye to render and immerse the viewer in our cloud city.
Citations: 5
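The production renderer is a ray marcher inside a game engine. As a rough CPU-side illustration of what ray marching a density voxel grid involves, accumulating in-scattered light and Beer-Lambert transmittance along each ray, here is a simplified Python sketch (grid contents, step size, and lighting are all stand-in assumptions):

```python
import numpy as np

def march(grid, origin, direction, step=0.5, sigma=1.2, max_dist=64.0):
    """Accumulate radiance and transmittance along one ray through a
    density voxel grid using Beer-Lambert absorption."""
    transmittance, radiance, t = 1.0, 0.0, 0.0
    d = direction / np.linalg.norm(direction)
    while t < max_dist and transmittance > 1e-3:  # early out once opaque
        p = origin + t * d
        i, j, k = int(p[0]), int(p[1]), int(p[2])  # nearest-voxel lookup
        if all(0 <= c < n for c, n in zip((i, j, k), grid.shape)):
            absorption = np.exp(-sigma * grid[i, j, k] * step)
            radiance += transmittance * (1.0 - absorption)  # uniform white light
            transmittance *= absorption
        t += step
    return radiance, transmittance

# a cube of density in the middle of a 32^3 grid
grid = np.zeros((32, 32, 32))
grid[12:20, 12:20, 12:20] = 0.8
color, alpha = march(grid, origin=np.array([0.0, 16.0, 16.0]),
                     direction=np.array([1.0, 0.0, 0.0]))
print(f"radiance={color:.3f} transmittance={alpha:.3f}")
```

The 0.6 ms per-eye budget the talk reports comes from running such a loop in-engine; this sketch shows only the structure of the loop, not the optimization.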
How to build a human: practical physics-based character animation
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947698
James Jacobs, J. Barbič, Essex Edwards, C. Doran, Andy van Straten
Abstract: We present state-of-the-art character animation techniques for generating realistic anatomical motion of muscles, fat, and skin. Physics-based character animation uses computational resources in lieu of exhaustive artist effort to produce physically realistic images and animations. This principle has already seen widespread adoption in rendering, fluids, and cloth simulation. We believe that the savings in manpower and the improved realism provided by a physics- and anatomy-based approach to characters cannot be matched by other techniques. Over the past year we have developed a physics-based character toolkit at Ziva Dynamics and used it to create a photo-realistic human character named Adrienne. We give an overview of the workflow used to create Adrienne, from modeling of anatomical bodies to their simulation via the Finite Element Method. We also discuss practical considerations necessary for effective physics-based character animation.
Citations: 12
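The toolkit simulates anatomy with the Finite Element Method, which is too involved to reproduce in a listing. As a deliberately simplified stand-in that shares the same loop structure (accumulate forces, integrate, repeat), here is a mass-spring step in Python; this is not FEM and not Ziva's code:

```python
import numpy as np

def step(positions, velocities, springs, rest, k=50.0, mass=1.0,
         damping=0.98, dt=1.0 / 240.0):
    """One explicit integration step of a mass-spring network: accumulate
    gravity and Hookean spring forces, then integrate velocities/positions."""
    gravity = np.array([0.0, -9.8, 0.0])
    forces = np.tile(gravity * mass, (len(positions), 1))
    for (a, b), r in zip(springs, rest):
        delta = positions[b] - positions[a]
        length = np.linalg.norm(delta)
        if length > 1e-9:
            f = k * (length - r) * (delta / length)  # Hooke's law
            forces[a] += f
            forces[b] -= f
    velocities = damping * (velocities + dt * forces / mass)
    return positions + dt * velocities, velocities

# two points joined by one spring; the top point is pinned each step
pos = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
for _ in range(240):  # one second at 240 Hz
    pos, vel = step(pos, vel, springs=[(0, 1)], rest=[1.0])
    pos[0], vel[0] = [0.0, 1.0, 0.0], 0.0  # re-pin the anchor
print(pos[1])  # the free point hangs below the anchor, stretched by gravity
```

FEM replaces the springs with volumetric elements whose forces come from a material model, which is what allows anatomy-based approaches to capture the behavior of muscle and fat.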
Searching for the interesting stuff in a multi-dimensional parameter space
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947690
Andy Lomas
Abstract: This talk describes work that I have been doing using generative systems, and the problems this raises in dealing with multi-dimensional parameter spaces. In particular I am interested in problems where there are too many parameters to do a simple exhaustive search and only a small number of parameter combinations are likely to achieve interesting results, but the user still wants to retain creative influence. For a number of years I have been exploring how intricate complex structures may be created by simulating growth processes. In early work, such as the Aggregation (Lomas 2005) and Flow series, a small number of parameters controlled various effects that could bias the growth. These could be explored by simply varying all the parameters independently and running simulations to test the results. Simple methods such as these work well when there are up to 3 parameters. However, as the number of parameters increases, the task rapidly becomes more complex, and methods that exhaustively sample all the parameters independently are no longer viable. In this talk I discuss how I have approached this problem for my recent Cellular Forms (Lomas 2014) and Hybrid Forms (Lomas 2015) works, which can have more than 30 parameters, any of which could affect the simulation process in complex and unexpected ways. In particular, systems that have the potential for interesting emergent results often exhibit difficult behavior, where most sets of parameter values create uninteresting regularity or chaos; only in the transition areas between these states are the most interesting complex results found. To help solve these problems I have been developing a tool called 'Species Explorer'. This uses a hybrid approach that combines evolutionary and lazy machine learning techniques to help the user find combinations of parameters that may be worth sampling, helping them to explore for novelty as well as to refine particularly promising results.
Citations: 0
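Species Explorer itself is not published in the abstract. As a generic sketch of the evolutionary half of such a hybrid (the lazy machine-learning ranking is omitted), the following Python keeps the best-scoring parameter vectors each generation and refills the population with mutated copies; the score function is a stand-in for a human rating or a learned predictor:

```python
import random

def evolve(score, n_params=30, pop_size=20, generations=50,
           mutation_rate=0.2, sigma=0.1, seed=7):
    """Evolutionary search over [0,1]^n: keep the best-scoring half of the
    population, refill with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = list(parent)
            for i in range(n_params):
                if rng.random() < mutation_rate:  # perturb some parameters
                    child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, sigma)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)

def score(params):
    """Stand-in 'interestingness' rating; peaks at the center of the space."""
    return -sum((p - 0.5) ** 2 for p in params)

print(f"best score: {score(evolve(score)):.4f}")
```

In the talk's setting each evaluation is an expensive growth simulation judged by a person, which is exactly why a learned model that predicts promising samples becomes worthwhile.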
Selective and dynamic cloth fold smoothing with collision resolution
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947691
Arunachalam Somasundaram
Abstract: We present techniques to selectively and dynamically detect and smooth folds in a cloth mesh after simulation. This gives artists the controls to emphasize or de-emphasize certain folds, clean up simulation errors that can cause crumpled cloth, and resolve cloth-body interpenetrations that can happen during smoothing. These techniques are simple and fast, and help the artist to direct, clean up, and enrich the look of simulated cloth.
Citations: 2
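The paper's exact fold-detection and smoothing operators are not reproduced in this listing. A minimal sketch of the combination it describes, smoothing only selected vertices and pushing any that sink into the body back to its surface, could look like this in Python (the spherical body is a stand-in for a real collision mesh):

```python
import numpy as np

def smooth_folds(verts, neighbors, selected, body_center, body_radius,
                 iterations=10, strength=0.5):
    """Laplacian-smooth only the selected vertices, then resolve any
    cloth-body interpenetration the smoothing introduced."""
    verts = verts.copy()
    for _ in range(iterations):
        for i in selected:
            avg = np.mean([verts[j] for j in neighbors[i]], axis=0)
            verts[i] += strength * (avg - verts[i])  # pull toward neighbor average
            offset = verts[i] - body_center
            dist = np.linalg.norm(offset)
            if dist < body_radius:  # vertex sank into the body
                verts[i] = body_center + offset * (body_radius / dist)
    return verts

# three vertices with the middle one dented inside a unit sphere at the origin
v = np.array([[1.5, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
nbrs = {1: [0, 2]}
print(smooth_folds(v, nbrs, selected=[1],
                   body_center=np.zeros(3), body_radius=1.0)[1])
```

Driving `selected` (and a per-vertex `strength`) from artist-painted weights is what would make the smoothing selective and directable in the sense the abstract describes.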
Portable real-time character rigs for virtual reality experiences
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947694
Helge Mathee, Bernhard Haux
Abstract: In this presentation we describe a work-in-progress approach to a portable character animation pipeline for real-time scenarios that can dramatically reduce iteration time while also increasing character quality and flexibility. Simply put, it is a What You Rig and Animate (in the DCC app) Is What You Get (in the VR experience) approach. Our implementation uses the Python-based Kraken tool to generate a rig that can run in Autodesk Maya® and also a version that can be executed by Fabric Engine within Unreal Engine®. By essentially running the same full rig both in Maya and Unreal, we are able to maintain film-quality characters that keep the same richness and animation control. Portable characters have their rigs defined in a way that allows them to run in any environment while maintaining the full flexibility and functionality of the original control and deformation rig, which in turn allows artistic intent to be preserved at all stages.
Citations: 0
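Kraken's real output targets Fabric Engine graphs; purely to illustrate the portability idea, the same host-agnostic rig logic driven through thin per-host adapters, here is a hypothetical Python sketch using a classic two-bone IK solver as the shared logic (all class and function names are invented):

```python
import math

def two_bone_ik(root, target, len1, len2):
    """Host-agnostic solver: the two joint angles (radians) that reach
    'target' from 'root' in 2D, via the law of cosines."""
    dx, dy = target[0] - root[0], target[1] - root[1]
    dist = min(math.hypot(dx, dy), len1 + len2 - 1e-6)  # clamp to reach
    elbow = math.acos((len1 ** 2 + len2 ** 2 - dist ** 2) / (2 * len1 * len2))
    shoulder = math.atan2(dy, dx) - math.acos(
        (len1 ** 2 + dist ** 2 - len2 ** 2) / (2 * len1 * dist))
    return shoulder, math.pi - elbow

class MayaAdapter:
    """Would read/write Maya attributes (e.g. via maya.cmds) around the solver."""
    def drive(self, solver, *args):
        return solver(*args)

class EngineAdapter:
    """Would read/write engine transforms each frame around the same solver."""
    def drive(self, solver, *args):
        return solver(*args)

# identical rig logic produces identical poses in both hosts
args = ((0.0, 0.0), (1.2, 0.8), 1.0, 1.0)
assert MayaAdapter().drive(two_bone_ik, *args) == EngineAdapter().drive(two_bone_ik, *args)
print("same pose in both hosts:", MayaAdapter().drive(two_bone_ik, *args))
```

Keeping the solver free of any host API is the design choice that makes "what you rig is what you get" possible: only the thin adapters differ between Maya and the engine.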
Creating an actor-specific facial rig from performance capture
Proceedings of the 2016 Symposium on Digital Production. Pub Date: 2016-07-23. DOI: 10.1145/2947688.2947693
Yeongho Seol, Wan-Chun Ma, J. P. Lewis
Abstract: Creating a high-quality blendshape rig usually involves a large amount of effort from skilled artists. Although current 3D reconstruction technologies are able to capture accurate facial geometry of the actor, it is still very difficult to build a production-ready blendshape rig from unorganized scans. Removing rigid head motion and separating mixed expressions from the captures are two of the major challenges in this process. We present a technique that creates a facial blendshape rig based on performance capture and a generic face rig. The customized rig accurately captures actor-specific face details while producing a semantically meaningful FACS basis. The resulting rig faithfully serves both artist-friendly keyframe animation and high-quality facial motion retargeting in production.
Citations: 19
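The abstract names two challenges, removing rigid head motion and separating mixed expressions, without detailing the authors' solutions. The standard building blocks for each are rigid Procrustes (Kabsch) alignment and a non-negative least-squares blendshape solve, sketched below with NumPy and SciPy; this is the textbook approach, not necessarily the paper's:

```python
import numpy as np

def remove_rigid_motion(scan, neutral):
    """Kabsch/Procrustes: find the rigid rotation and translation that best
    align an (n, 3) captured scan to the neutral head, and strip them off,
    leaving only the non-rigid expression deformation."""
    ps = scan - scan.mean(axis=0)
    pn = neutral - neutral.mean(axis=0)
    U, _, Vt = np.linalg.svd(ps.T @ pn)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (R @ ps.T).T + neutral.mean(axis=0)

def solve_weights(B, delta):
    """Decompose a stabilized expression on the blendshape basis:
    B is (3n, k) with one flattened shape delta per column; delta is the
    flattened (3n,) stabilized scan minus the neutral. Non-negativity keeps
    the weights semantically meaningful for a FACS-style basis.
    Assumes SciPy is available."""
    from scipy.optimize import nnls
    w, _ = nnls(B, delta)
    return w
```

Stabilizing every scan first, then solving for weights, is what separates the two problems the abstract identifies; a production solver would add regularization and temporal smoothing on top.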