ACM SIGGRAPH 2017 Talks: Latest Publications

Large scale VFX pipelines
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085021
M. Chambers, J. Israel, A. Wright
To ensure peak utilization of hardware resources and to handle the increasingly dynamic demands placed on its render farm infrastructure, Weta Digital developed custom queuing, scheduling, job-description and submission systems, which work in concert to maximize the available cores across a large range of non-uniform task types. The render farm is one of the most important, highest-traffic components of a modern VFX pipeline. Beyond the hardware itself, a render farm requires careful management and maintenance to ensure it operates at peak efficiency. In Weta's case this hardware consists of a mix of over 80,000 CPU cores and a number of GPU resources, and as it has grown it has introduced many interesting scalability challenges. In this talk we present our end-to-end solutions in the render farm space: from the structure of the resource and the inherent problems introduced at this scale, through the development of Plow, our management, queuing and monitoring software. Finally, we detail the deployment process and the production benefits realized. Within each section we present the scalability issues encountered and detail our strategy, process and results in solving them. The ever-increasing complexity and computational demands of modern VFX drive Weta's need to innovate in all areas, not only surfacing, rendering and simulation but also core pipeline infrastructure.
Citations: 0
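The abstract describes Plow only at a high level, so the following is just a minimal sketch of the core problem such a system faces: packing jobs with non-uniform core counts onto a fixed pool, deferring what does not fit until cores free up. All names (`Job`, `schedule`) are illustrative and are not Weta's API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                      # lower value = scheduled first
    cores: int = field(compare=False)  # cores requested by this task
    name: str = field(compare=False)

def schedule(jobs, total_cores):
    """Greedily assign queued jobs to a fixed pool of cores.

    Returns (running, waiting): jobs that fit right now, and jobs
    deferred until cores are released.
    """
    heap = list(jobs)
    heapq.heapify(heap)                # order by priority
    free = total_cores
    running, waiting = [], []
    while heap:
        job = heapq.heappop(heap)
        if job.cores <= free:
            free -= job.cores
            running.append(job.name)
        else:
            waiting.append(job.name)
    return running, waiting

# Example: an 80-core slice of a farm with mixed task sizes.
jobs = [Job(0, 64, "sim"), Job(1, 32, "render"), Job(2, 8, "comp")]
print(schedule(jobs, 80))
```

A production scheduler adds fairness, preemption and per-show quotas on top of this; the sketch shows only the basic packing decision.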
FurCollide: fast, robust, and controllable fur collisions with meshes
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085051
Arunachalam Somasundaram
We present FurCollide, a fast, robust, and artist-friendly tool used for collision detection and collision resolution of fur curves with meshes. The tool helps artists interact with and control tens of thousands of curves with ease while providing high-fidelity realistic and/or artistic collision results. This tool is in use at DreamWorks Animation and has been used in a wide variety of fur and/or grass collision situations in various films.
Citations: 6
Precomputed multiple scattering for light simulation in participating medium
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085037
Beibei Wang, Nicolas Holzschuch
Illumination simulation involving participating media is computationally intensive. The overall aspect of the material depends on simulating a large number of scattering events inside the material; combined, the contributions of these scattering events yield a smooth illumination. Computing them using ray-tracing or photon-mapping algorithms is expensive: convergence time is high, and pictures before convergence are low quality (see Figure 1). In this paper, we precompute the result of multiple scattering events, assuming an infinite medium, and store it in two 4D tables. These precomputed tables can be used with many rendering algorithms, such as Virtual Ray Lights (VRL), Unified Points, Beams and Paths (UPBP) or Manifold Exploration Metropolis Light Transport (MEMLT), greatly reducing the convergence time. The original algorithm takes care of low-order scattering (single and double scattering), while our precomputations are used for multiple scattering (more than two scattering events).
Citations: 5
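The abstract does not give the table layout, so here is only a minimal sketch of the precompute-then-lookup pattern it relies on, using a 2D table with bilinear interpolation in place of the paper's two 4D tables. The `simulate_multiple_scattering` stand-in is invented for illustration; the real entries would come from an expensive offline light-transport simulation.

```python
import numpy as np

# Toy stand-in for the offline multiple-scattering simulation.
def simulate_multiple_scattering(albedo, depth):
    return albedo ** 2 * np.exp(-depth)

# Offline: tabulate the response on a regular (albedo, depth) grid.
albedos = np.linspace(0.0, 1.0, 64)
depths = np.linspace(0.0, 10.0, 64)
table = simulate_multiple_scattering(albedos[:, None], depths[None, :])

# At render time: replace the simulation with a bilinear table lookup.
def lookup(albedo, depth):
    i = np.clip(albedo / 1.0 * 63, 0, 62)   # fractional grid coords
    j = np.clip(depth / 10.0 * 63, 0, 62)
    i0, j0 = int(i), int(j)
    fi, fj = i - i0, j - j0
    return ((1 - fi) * (1 - fj) * table[i0, j0]
            + fi * (1 - fj) * table[i0 + 1, j0]
            + (1 - fi) * fj * table[i0, j0 + 1]
            + fi * fj * table[i0 + 1, j0 + 1])

# The lookup closely matches the direct simulation at off-grid points.
print(lookup(0.7, 2.5), simulate_multiple_scattering(0.7, 2.5))
```

The payoff is that every scattering query during rendering becomes a constant-time interpolation instead of a stochastic simulation.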
Evolution of AR in Pokémon go
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3107958
C. Kramer
(No abstract available.)
Citations: 1
The making of Google earth VR
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085094
Dominik P. Käser, Evan Parker, A. Glazier, Mike Podwal, Matthew Seegmiller, Chun-Po Wang, Per Karlsson, N. Ashkenazi, Joanna Kim, Andre Le, Matthias Bühlmann, Joshua Moshier
One of the great promises of virtual reality is that it can allow people to visit places in the world that they might otherwise be unable to. Since the recent renaissance of virtual reality, content creators have exercised various techniques such as 360-degree cameras and photogrammetry to make this promise come true. At Google, we spent more than 10 years capturing every part of the world as part of the Google Earth project. The result is a rich 3D mesh that contains trillions of triangles [Kontkanen and Parker 2014] and as such is predestined to be a good data source for VR content. In [Kaeser and Buehlmann 2016] we discussed some of our early experiments with bringing Google Earth to virtual reality, but without a focus on developing a product. Following these experiments, we worked extensively to create a well-rounded product, Google Earth VR, which we eventually launched to the world in November 2016. Google Earth VR quickly became one of the most actively used VR applications in the market and has won several awards since. This talk discusses the journey of the Google Earth VR project from its early prototypes to its final launched stage.
Citations: 26
Field trip to Mars
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085074
A. Rowan-Robinson
"Field Trip to Mars" is the first-ever headset-free group virtual reality vehicle experience. Taking the literal shape of a classic yellow school bus, the vehicle is home to an immersive virtual experience that transports school children to the surface of the Red Planet.
Citations: 0
Optimizing VR for all users through adaptive focus displays
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085029
Nitish Padmanaban, Robert Konrad, Emily A. Cooper, Gordon Wetzstein
Personal computing devices have evolved steadily, from desktops to mobile devices, and now to emerging trends in wearable computing. Wearables are expected to be integral to consumer electronics, with the primary mode of interaction often being a near-eye display. However, current-generation near-eye displays are unable to provide fully natural focus cues for all users, which often leads to discomfort. This core limitation is due to the optics of the systems themselves, with current displays being unable to change focus as required by natural vision. Furthermore, the form factor often makes it difficult for users to wear corrective eyewear. With two prototype near-eye displays, we address these issues using display modes that adapt to the user via computational optics. These prototypes make use of focus-tunable lenses, mechanically actuated displays, and gaze tracking technology to correct common refractive errors per user, and provide natural focus cues by dynamically updating scene depth based on where a user looks. Recent advances in computational optics hint at a future in which some users experience better vision in the virtual world than in the real one.
Citations: 4
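As a rough illustration of the diopter arithmetic such an adaptive display performs: to present natural focus cues, a focus-tunable lens must match the vergence of the gazed-at virtual depth (1/d diopters for a depth of d meters), offset by the user's own spherical correction. This is a simplification of the authors' prototypes, which also involve mechanically actuated displays and gaze tracking, and sign conventions vary between optics texts.

```python
def tunable_lens_power(gaze_distance_m, user_correction_d=0.0):
    """Target optical power (diopters) for a focus-tunable lens.

    gaze_distance_m:   depth of the virtual object the user fixates,
                       as reported by a gaze tracker (meters).
    user_correction_d: the user's spherical prescription (diopters),
                       e.g. -2.0 for a 2 D myope, so eyeglasses can
                       stay off inside the headset.
    """
    vergence = 1.0 / gaze_distance_m   # diopters demanded by the scene
    return vergence + user_correction_d

# An emmetropic user looking at a virtual object 2 m away:
print(tunable_lens_power(2.0))         # 0.5 D
# A -2 D myope looking at the same object:
print(tunable_lens_power(2.0, -2.0))   # -1.5 D
```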
A case study on raytracing-in-the-loop optimization: focal surface displays
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085071
N. Matsuda, Alexander Fix, Douglas Lanman
Optimization-based design of optical systems can yield configurations that would be impractical to achieve with manual parameter adjustment. Nonetheless, most approaches are geared toward one-time, offline generation of static configurations to be fabricated physically. Recently, challenging computational imaging problems, such as seeing around corners or through scattering media, have utilized dynamically addressable optical elements to probe scene light transport. A new class of optimization techniques targeted at these dynamic applications has emerged in which stochastic raytracing replaces the fixed operators applied with conventional optimization methods. By modeling optical systems as raytracing operators, more complex non-linear phenomena and larger problem sizes can be considered. We introduce a simple raytracing-in-the-loop optimization model for a head-mounted display (HMD) containing a spatial light modulator (SLM). Using this approach, we are able to compute color images to be displayed in concert with spatially varying SLM phase maps at a resolution that would otherwise be computationally infeasible. We also consider extensions of this model that may further enhance the performance of the target system.
Citations: 0
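A minimal sketch of the raytracing-in-the-loop pattern: the raytracer is treated as a black box inside an optimization loop, with finite differences standing in for the operator's derivative. The `raytrace` function below is a toy stand-in invented for illustration; it bears no relation to the authors' SLM phase model.

```python
def raytrace(phase):
    # Toy stand-in for a raytracing operator that maps an SLM phase
    # value to an observed intensity at one sensor pixel.
    return (phase - 1.3) ** 2 + 0.5

def optimize(target, phase=0.0, lr=0.2, steps=300, eps=1e-4):
    """Drive raytrace(phase) toward `target` by finite-difference
    gradient descent, with the raytracer as a black box in the loop."""
    for _ in range(steps):
        loss = (raytrace(phase) - target) ** 2
        loss_eps = (raytrace(phase + eps) - target) ** 2
        grad = (loss_eps - loss) / eps   # one-sided finite difference
        phase -= lr * grad
    return phase

phase = optimize(target=0.6)
print(raytrace(phase))   # close to the requested 0.6
```

Real systems use stochastic raytracing and far larger parameter vectors (one phase per SLM pixel), so differentiable or adjoint formulations replace naive finite differences; the loop structure is the same.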
Moana: geometry based disco ball lighting for Tamatoa's lair
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085058
D. Byun, Shant Ergenian, Gregory Culp
In the "Lair of Tamatoa" sequence of our latest movie, Moana, we had 56 disco-ball lighting effects shots. Our effects and lighting departments collaborated closely to create the bizarre and ludicrous environment of the scene. We developed a geometry-based lighting pipeline which allowed us to interactively design the light effects.
Citations: 0
Modeling vellus facial hair from asperity scattering silhouettes
ACM SIGGRAPH 2017 Talks Pub Date : 2017-07-30 DOI: 10.1145/3084363.3085057
Chloe LeGendre, Loc Huynh, Shanhe Wang, P. Debevec
We present a technique for modeling the vellus hair over the face based on observations of asperity scattering along a subject's silhouette. We photograph the backlit subject in profile and three-quarters views with a high-resolution DSLR camera to observe the vellus hair on the side and front of the face and separately acquire a 3D scan of the face geometry and texture. We render a library of backlit vellus hair patch samples with different geometric parameters such as density, orientation, and curvature, and we compute image statistics for each set of parameters. We trace the silhouette contour in each face image and straighten the backlit hair silhouettes using image resampling. We compute image statistics for each section of the facial silhouette and determine which set of hair modeling parameters best matches the statistics. We then generate a complete set of vellus hairs for the face by interpolating and extrapolating the matched parameters over the skin. We add the modeled vellus hairs to the 3D facial scan and generate renderings under novel lighting conditions, generally matching the appearance of real photographs.
Citations: 3
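The statistics-matching step above can be sketched as a nearest-neighbor search over a precomputed patch library. The library values, parameter names, and the choice of (mean, standard deviation) as the statistics are all invented for illustration; the paper's actual statistics and parameter space are richer.

```python
import math

# Hypothetical patch library: rendering parameters -> image statistics
# (mean intensity, std). In the paper these come from rendering backlit
# vellus-hair patches with varying density, orientation, and curvature.
patch_library = {
    ("sparse", "short"): (0.12, 0.05),
    ("sparse", "long"):  (0.18, 0.09),
    ("dense",  "short"): (0.30, 0.07),
    ("dense",  "long"):  (0.42, 0.12),
}

def match_parameters(observed_stats):
    """Pick the library parameters whose statistics are closest
    (Euclidean distance) to statistics measured on a straightened
    silhouette strip of the photograph."""
    return min(patch_library,
               key=lambda p: math.dist(patch_library[p], observed_stats))

print(match_parameters((0.33, 0.08)))
```

The matched parameters for each silhouette section are then interpolated over the rest of the skin, as the abstract describes.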