ACM SIGGRAPH 2017 Talks: Latest Publications

Dance motion analysis and editing using Hilbert-Huang transform
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085023
Ran Dong, DongSheng Cai, Nobuyoshi Asai
{"title":"Dance motion analysis and editing using hilbert-huang transform","authors":"Ran Dong, DongSheng Cai, Nobuyoshi Asai","doi":"10.1145/3084363.3085023","DOIUrl":"https://doi.org/10.1145/3084363.3085023","url":null,"abstract":"Human motions (especially, dance motions) are very noisy and it is difficult to analyze the motions. To resolve this problem, we propose a new method to decompose and edit the motions using the Hilbert-Huang transform (HHT). The HHT decomposes a chromatic signal into \"monochromatic\" signals that are the so-called Intrinsic Mode Functions (IMFs) using an Empirical Mode Decomposition (EMD)[Huang 2014]. The HHT has the advantage to analyze non-stationary and nonlinear signals like human joint motions over the FFT or Wavelet transform. In the present research, we propose a new framework to analyze a famous Japanese threesome pop singer group \"Perfume\". Then using the NA-MEMD, we decompose dance motions into motion (choreographic) primitives or IMFs, which can be scaled, combined, subtracted, exchanged, and modified self-consistently.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124470184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
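The EMD step named in the abstract, repeatedly subtracting the mean of upper and lower spline envelopes until a "monochromatic" IMF remains, can be sketched in a few lines. Below is a minimal single-channel sifting loop for illustration only; the talk itself uses the noise-assisted multivariate variant (NA-MEMD), and the fixed sift count, extrema threshold, and synthetic "joint angle" signal are assumptions made for the sketch.

```python
# Minimal single-channel EMD sketch (illustrative only; the talk uses NA-MEMD,
# a noise-assisted multivariate variant). Assumes a 1-D joint-angle signal.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower spline envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                              # too few extrema: treat x as the residual trend
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - 0.5 * (upper + lower)

def emd(x, t, max_imfs=6, n_sift=10):
    """Decompose x(t) into Intrinsic Mode Functions plus a residual trend."""
    imfs, residual = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(n_sift):                  # fixed sift count as a simple stopping rule
            h_new = sift_once(h, t)
            if h_new is None:
                return imfs, residual
            h = h_new
        imfs.append(h)
        residual = residual - h
    return imfs, residual

# Usage: decompose a noisy "joint angle" curve, then rebuild it from selected IMFs.
t = np.linspace(0.0, 4.0, 800)
joint = np.sin(2 * np.pi * 1.5 * t) + 0.4 * np.sin(2 * np.pi * 6.0 * t) \
        + 0.05 * np.random.default_rng(0).normal(size=t.size)
imfs, trend = emd(joint, t)
denoised = sum(imfs[1:]) + trend                 # e.g. drop the highest-frequency IMF
```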
Headset removal for virtual and mixed reality
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085083
Christian Früh, Avneesh Sud, Vivek Kwatra
{"title":"Headset removal for virtual and mixed reality","authors":"Christian Früh, Avneesh Sud, Vivek Kwatra","doi":"10.1145/3084363.3085083","DOIUrl":"https://doi.org/10.1145/3084363.3085083","url":null,"abstract":"Virtual Reality (VR) has advanced significantly in recent years and allows users to explore novel environments (both real and imaginary), play games, and engage with media in a way that is unprecedentedly immersive. However, compared to physical reality, sharing these experiences is difficult because the user's virtual environment is not easily observable from the outside and the user's face is partly occluded by the VR headset. Mixed Reality (MR) is a medium that alleviates some of this disconnect by sharing the virtual context of a VR user in a flat video format that can be consumed by an audience to get a feel for the user's experience. Even though MR allows audiences to connect actions of the VR user with their virtual environment, empathizing with them is difficult because their face is hidden by the headset. We present a solution to address this problem by virtually removing the headset and revealing the face underneath it using a combination of 3D vision, machine learning and graphics techniques. We have integrated our headset removal approach with Mixed Reality, and demonstrate results on several VR games and experiences.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124647464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 32
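As a rough illustration of the final compositing step such a system implies, the sketch below blends a separately produced face render into the mixed-reality frame inside a headset mask, at partial opacity so the headset reads as translucent. The function name, mask, opacity value, and synthetic buffers are assumptions made for this sketch; the 3D vision and machine-learning components that would actually produce the face render and mask are not shown.

```python
# Illustrative composite: blend a rendered face image into an MR frame inside a
# headset mask, at partial opacity. All inputs here are synthetic placeholders.
import numpy as np

def remove_headset(mr_frame, face_render, headset_mask, alpha=0.7):
    """mr_frame, face_render: (H, W, 3) in [0, 1]; headset_mask: (H, W) in [0, 1]."""
    a = alpha * headset_mask[..., None]              # per-pixel blend weight
    return (1.0 - a) * mr_frame + a * face_render

# Usage with synthetic buffers.
H, W = 90, 160
frame = np.full((H, W, 3), 0.2)                      # mixed-reality composite frame
face = np.full((H, W, 3), 0.6)                       # face re-rendered from a prior 3D capture
mask = np.zeros((H, W)); mask[30:60, 60:100] = 1.0   # region where the headset occludes the face
out = remove_headset(frame, face, mask)
```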
Designing look-and-feel using generalized crosshatching
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085054
Yuxiao Du, E. Akleman
{"title":"Designing look-and-feel using generalized crosshatching","authors":"Yuxiao Du, E. Akleman","doi":"10.1145/3084363.3085054","DOIUrl":"https://doi.org/10.1145/3084363.3085054","url":null,"abstract":"In this work, we have developed an approach to include any cross-hatching technique into any rendering system with global illumination effects (see Figure 1). Our new approach provide a robust computation to obtain hand-drawn effects for a wide variety of diffuse and specular materials. Our contributions can be summarized as follows: (1) A Barycentric shader that can provide generalized cross-hatching with multi-textures; and (2) A texture synthesis method that can automatically produce crosshatching textures from any given image.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116765929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
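As a rough sketch of the kind of tone-driven, multi-texture blending a barycentric cross-hatching shader implies, the code below mixes a stack of hatch textures of decreasing stroke density using convex (barycentric) per-pixel weights derived from the shaded tone. The triangular weighting scheme, texture stack, and synthetic data are assumptions for illustration, not the authors' actual shader.

```python
# Tone-driven blend of hatch textures with convex (barycentric) weights.
# Illustrative only; the weighting and texture layout are assumed, not the talk's.
import numpy as np

def hatch_weights(tone, n_levels):
    """Per-pixel convex weights over n_levels hatch textures; tone in [0, 1], 0 = darkest."""
    x = np.clip(np.asarray(tone, dtype=float), 0.0, 1.0) * (n_levels - 1)  # continuous level index
    ks = np.arange(n_levels).reshape((n_levels,) + (1,) * x.ndim)
    w = np.maximum(0.0, 1.0 - np.abs(x[None, ...] - ks))                   # triangular basis
    return w / np.sum(w, axis=0, keepdims=True)                            # weights sum to one

def shade_crosshatch(tone_image, hatch_textures):
    """Blend a stack of hatch textures (darkest to lightest) by per-pixel tone."""
    weights = hatch_weights(tone_image, len(hatch_textures))               # (levels, H, W)
    return np.sum(weights * np.stack(hatch_textures), axis=0)              # (H, W)

# Usage with synthetic data: 4 hatch levels over a smooth tone ramp.
H, W = 64, 64
tone = np.tile(np.linspace(0.0, 1.0, W), (H, 1))                           # stand-in diffuse shading term
rng = np.random.default_rng(0)
# Stand-in "hatch textures": thresholded noise, from dense strokes to nearly blank paper.
hatches = [(rng.random((H, W)) > d).astype(float) for d in (0.8, 0.6, 0.3, 0.05)]
image = shade_crosshatch(tone, hatches)
```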
Bringing Lou to life: a study in creating Lou
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085089
Peter Tieryas, Henry Garcia, S. Truman, Evan Bonifacio
{"title":"Bringing Lou to life: a study in creating Lou","authors":"Peter Tieryas, Henry Garcia, S. Truman, Evan Bonifacio","doi":"10.1145/3084363.3085089","DOIUrl":"https://doi.org/10.1145/3084363.3085089","url":null,"abstract":"In Pixar's Lou, a combination of lost and found items comes to life, multiple pieces assembling to create the eponymous character. There were many visual and technical challenges to creating a character that can take on almost any form, using many of the random objects around him to convey emotion and feeling. We've highlighted several of the ways animation, modeling, rigging, simulation, and shading worked in conjunction to develop artistic and technical solutions to make Lou feel as real as the world around him.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125187124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Hairy effects in Trolls
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085070
Brian Missey, Amaury Aubel, Arunachalam Somasundaram, Megha Davalath
{"title":"Hairy effects in Trolls","authors":"Brian Missey, Amaury Aubel, Arunachalam Somasundaram, Megha Davalath","doi":"10.1145/3084363.3085070","DOIUrl":"https://doi.org/10.1145/3084363.3085070","url":null,"abstract":"Hair plays a feature role in the film Trolls. It is a crucial part of the overall character design of the Trolls themselves, typically composing over half the silhouette of the character. However, the use of hair on the show went well beyond the standard coif and bled into acting beats, traditional effects, environments, and set pieces. This talk presents the wide variety of unique and challenging hair effects in the film and the techniques used to create them.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128063938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Automated light probes from capture to render for Peter Rabbit
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085087
D. Heckenberg, Steve Agland, Jean-Pascal leBlanc, Raphael Barth
{"title":"Automated light probes from capture to render for Peter Rabbit","authors":"D. Heckenberg, Steve Agland, Jean-Pascal leBlanc, Raphael Barth","doi":"10.1145/3084363.3085087","DOIUrl":"https://doi.org/10.1145/3084363.3085087","url":null,"abstract":"We created an efficient pipeline for automated, HDR light probes for the hybrid live-action / animated feature film Peter Rabbit. A specially developed \"360°\" spherical camera allows on-set acquisition at more positions and in less time than traditional techniques. Reduced capture time, drastically simplified stitching and a custom multiple-exposure raw to HDR process minimizes artefacts in the resulting images. A semi-automated system recovers clipped radiance in direct sunlight using surfaces with known properties. By recording capture location and orientation and combining with other scene data we produce automated rendering setups using the light probes for illumination and projection onto 3d render geometry.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127008953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
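The multiple-exposure raw-to-HDR step lends itself to a small worked example: each exposure, assumed linear, estimates radiance as pixel value divided by exposure time, and the estimates are averaged with weights that ignore near-black and clipped pixels. The sketch below is a generic bracketed-exposure merge under those assumptions, not Animal Logic's custom process; the weight shape, thresholds, and fallback rule are illustrative choices.

```python
# Generic merge of bracketed exposures into a linear HDR radiance map.
# Assumes already-linear raw values in [0, 1]; illustrative, not the talk's pipeline.
import numpy as np

def merge_hdr(exposures, times, low=0.02, high=0.98):
    """exposures: list of (H, W) linear images in [0, 1]; times: exposure times in seconds."""
    num = np.zeros_like(exposures[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        # Hat-shaped weight: trust mid-range pixels, ignore near-black and clipped ones.
        w = np.clip(np.minimum(img - low, high - img) / (0.5 * (high - low)), 0.0, 1.0)
        num += w * img / t                        # each exposure estimates radiance as value / time
        den += w
    fallback = exposures[0] / times[0]            # shortest exposure for pixels clipped everywhere
    return np.where(den > 0, num / np.maximum(den, 1e-12), fallback)

# Usage with synthetic data: three exposures of the same scene, two stops apart.
rng = np.random.default_rng(1)
radiance = rng.uniform(1.0, 400.0, size=(4, 4))              # "true" scene radiance
times = [1 / 500, 1 / 125, 1 / 30]                           # shortest to longest exposure
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]     # ideal sensor that clips at 1.0
hdr = merge_hdr(shots, times)
```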
Rendering the darkness: glimpse on the LEGO Batman movie
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085090
D. Heckenberg, Luke Emrose, Matthew Reid, Michael Balzer, Antoine Roille, Max Liani
{"title":"Rendering the darkness: glimpse on the LEGO Batman movie","authors":"D. Heckenberg, Luke Emrose, Matthew Reid, Michael Balzer, Antoine Roille, Max Liani","doi":"10.1145/3084363.3085090","DOIUrl":"https://doi.org/10.1145/3084363.3085090","url":null,"abstract":"The technical and creative challenges of The LEGO Batman Movie motivated many changes to rendering at Animal Logic. The project was the first feature animation to be entirely rendered with the studio's proprietary path-tracer, Glimpse. Brick-based modelling, animation and destruction techniques taken to the extents of Gotham City required extraordinary scalability and control. The desire to separate complexity from artistic intent led to the development of a novel material composition system. Lensing and lighting choices also drove technical development for efficient in-render lens distortion, depth-of-field effects and accelerated handling of thousands of city and interior lights.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131961223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Director-centric virtual camera production tools for rogue one
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085053
Mike Jutan, Stephen Ellis
{"title":"Director-centric virtual camera production tools for rogue one","authors":"Mike Jutan, Stephen Ellis","doi":"10.1145/3084363.3085053","DOIUrl":"https://doi.org/10.1145/3084363.3085053","url":null,"abstract":"For Rogue One: A Star Wars Story, executive producer John Knoll wanted the all-CG shots to feel consistent with the signature handheld camera style that director Gareth Edwards captured on set. To achieve this, the Industrial Light & Magic (ILM) R&D team created a director-centric virtual camera system that encourages open set exploration of the all-CG Star Wars worlds. We enabled the director to achieve his artistic vision via our low footprint, flexible, iteration-based production toolset.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"15 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131452504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Moana: Foundation of a Lava Monster
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085076
Marc Bryant, Ian J. Coony, Jonathan Garcia
{"title":"Moana: Foundation of a Lava Monster","authors":"Marc Bryant, Ian J. Coony, Jonathan Garcia","doi":"10.1145/3084363.3085076","DOIUrl":"https://doi.org/10.1145/3084363.3085076","url":null,"abstract":"For Disney's Moana, the challenges presented by our story's fiery foe, Te Kā, required cross-departmental collaboration and the creation of new pipeline technology. From raging fires and erupting molten lava to churning pyroclastic plumes of steam and smoke, Te Kā was comprised of a large number of layered environmental elements. Effects artists composed heavily art-directed simulations alongside reusable effects assets. This hybrid approach allowed artists to quickly block in and visualize large portions of their shot prior to simulation or rendering. This Foundation Effects (or FFX) workflow became a core strategy for delivering Te Kā's complex effects across multiple sequences.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132653562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Production ready MPM simulations
ACM SIGGRAPH 2017 Talks, Pub Date: 2017-07-30, DOI: 10.1145/3084363.3085066
G. Klár, Jeff Budsberg, Matt Titus, Stephen Jones, K. Museth
{"title":"Production ready MPM simulations","authors":"G. Klár, Jeff Budsberg, Matt Titus, Stephen Jones, K. Museth","doi":"10.1145/3084363.3085066","DOIUrl":"https://doi.org/10.1145/3084363.3085066","url":null,"abstract":"We present two complementary techniques for Material Point Method (MPM) based simulations to improve their performance and to allow for fine-grained artistic control. Our entirely GPU-based solver is able perform up to five times faster than its multithreaded CPU counterpart as a result of our novel particle and grid transfer algorithms. On top of this, we introduce Adaptive Particle Activation, that both makes it possible to simulate only a reduced number of particles, and to give artists means for fine direction over the simulation.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121230361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
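The particle and grid transfers the abstract refers to are the standard MPM scatter/gather steps; the sketch below is a plain NumPy reference for the particle-to-grid (P2G) half in 2D with quadratic B-spline weights, which is the data movement a GPU solver has to reorganise for speed. The grid layout, mass/momentum-only transfer (no stress or APIC terms), and synthetic particles are simplifying assumptions, not the authors' solver.

```python
# Minimal 2-D MPM particle-to-grid (P2G) transfer with quadratic B-spline weights.
# Plain NumPy reference, not the GPU scatter described in the talk.
import numpy as np

def p2g(positions, velocities, masses, grid_res, h):
    """Scatter particle mass and momentum onto a (grid_res x grid_res) node grid."""
    grid_m = np.zeros((grid_res, grid_res))
    grid_mv = np.zeros((grid_res, grid_res, 2))
    for xp, vp, mp in zip(positions, velocities, masses):
        base = np.floor(xp / h - 0.5).astype(int)        # lower-left node of the 3x3 stencil
        fx = xp / h - base                                # per-axis offset in [0.5, 1.5)
        # Quadratic B-spline weights for the 3 nodes along each axis (sum to 1).
        w = np.array([0.5 * (1.5 - fx) ** 2,
                      0.75 - (fx - 1.0) ** 2,
                      0.5 * (fx - 0.5) ** 2])             # shape (3, 2)
        for i in range(3):
            for j in range(3):
                wij = w[i, 0] * w[j, 1]
                node = (base[0] + i, base[1] + j)
                grid_m[node] += wij * mp
                grid_mv[node] += wij * mp * vp
    # Grid velocities where there is mass (forces and boundary conditions would apply here).
    vel = np.zeros_like(grid_mv)
    nz = grid_m > 0
    vel[nz] = grid_mv[nz] / grid_m[nz][:, None]
    return grid_m, vel

# Usage: a few particles falling inside a 32x32 grid of spacing h.
rng = np.random.default_rng(2)
h = 1.0 / 32
pos = rng.uniform(0.3, 0.7, size=(50, 2))
vel = np.tile(np.array([0.0, -1.0]), (50, 1))
mass = np.full(50, 1.0)
grid_mass, grid_vel = p2g(pos, vel, mass, grid_res=32, h=h)
```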