{"title":"Developing interactive facial rigs in production environment","authors":"Jaewoo Seo, J. P. Lewis","doi":"10.1145/2614106.2614178","DOIUrl":"https://doi.org/10.1145/2614106.2614178","url":null,"abstract":"classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. SIGGRAPH 2014, August 10 – 14, 2014, Vancouver, British Columbia, Canada. 2014 Copyright held by the Owner/Author. ACM 978-1-4503-2960-6/14/08 Developing Interactive Facial Rigs in Production Environment","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121369837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Alternative strategies for runtime facial motion capture","authors":"Izmeth Siddeek","doi":"10.1145/2614106.2614139","DOIUrl":"https://doi.org/10.1145/2614106.2614139","url":null,"abstract":"Facial motion capture has hitherto been an effective, albeit costly, means of delivering performances for game characters. Using Kinect hardware, we consider a pipeline for delivering game-ready performances in Unity, enlisting the talents of actors and game developers. We begin with a review of the data acquisition pipeline as a basis for motion capture. This data acquisition process informs a pipeline based on the concept of skeletal retargeting, whereby the motion capture data stream may be mapped back to a common joint-based skeletal system, rendering it scalable and friendly for implementation in the game development pipeline. In this presentation we take a look at the facial motion capture data set and its applicability to varied characters. Aside from the technical challenges of creating assets for facial motion capture, there is the problem of credibly reproducing the performances of subjects: the closer one moves towards realism, the harder it becomes to create an empathic human face. With this in mind we address the phenomenon commonly described as the \"uncanny valley\" in relation to motion-captured facial performances and attempt to define the limits of the technology. Accessible runtime facial motion capture is an area of growing interest. The advent of the Kinect as a PC peripheral and Microsoft's Kinect Fusion project gives us a glimpse into the possibilities afforded by this nascent technology. The implications of such technology are wide-ranging and ultimately offer the prospect of revolutionizing interactive entertainment. Sequence of expressions with corresponding facial reference.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115318992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gravity: simulation as a multi-stage production tool","authors":"Sylvain Degrotte, Christopher Lawrence, Juan-Luis Sanchez, Russell Lloyd","doi":"10.1145/2614106.2614126","DOIUrl":"https://doi.org/10.1145/2614106.2614126","url":null,"abstract":"","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121464411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Monitoring data access patterns in large-scale rendering","authors":"M. Hills, Jim Vanns","doi":"10.1145/2614106.2614111","DOIUrl":"https://doi.org/10.1145/2614106.2614111","url":null,"abstract":"Framestore's production of Gravity placed unprecedented demands on its storage hardware. A great deal of attention had already been paid to data management---monitoring the content of file servers. But storage hardware also has limited capacity to transfer data to and from its client machines and users; excessive demand causes 'brown-outs' that can substantially slow CG production or bring it to a halt. We present our work to monitor the data access demands placed on file servers. Our system is not tied to any specific server or client software, covering both batch renderfarm processing and interactive use by users. The resulting data can be used to make immediate decisions in CG production, as well as to influence long-term system and CG software design.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114952571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive streaming of compressed 3D graphics in a web browser","authors":"G. Lavoué, L. Chevalier, F. Dupont","doi":"10.1145/2614106.2614170","DOIUrl":"https://doi.org/10.1145/2614106.2614170","url":null,"abstract":"The introduction of the WebGL API for rendering 3D graphics within the browser has boosted the development of 3D Web applications. However, delivering 3D Web content without latency remains a challenging, as-yet-unsolved issue. In this context, we introduce a solution for fast progressive streaming and visualization of compressed 3D graphics on the Web. Our approach is based on two main features: (1) a dedicated progressive compression algorithm especially suited to Web-based environments, which produces a compact binary compressed format allowing very fast transmission as well as progressive decoding with levels of detail; and (2) a plugin-free solution for streaming, decoding and visualization in the Web browser, which relies on an optimized parallel JavaScript/WebGL implementation. Our system allows instantaneous interactive visualization by providing a good approximation of the 3D models within a few milliseconds, even for huge data and low-bandwidth channels. Experiments and comparison with competing solutions for 3D Web content delivery demonstrate its excellent results in terms of latency, adaptability and quality of user experience.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121063996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large scale simulation and surfacing of water and ice effects in Dragons 2","authors":"B. V. Opstal, L. Janin, K. Museth, M. Aldén","doi":"10.1145/2614106.2614156","DOIUrl":"https://doi.org/10.1145/2614106.2614156","url":null,"abstract":"“How to Train Your Dragon 2” introduces new creatures of truly massive scale, e.g. the Bewilderbeest shown above, which measures approximately 600 ft from head to tail. This imposed unique challenges on the FX department when these creatures interact with environments like water, or when they use their special ability to breathe ice. While we could leverage existing tools developed in previous productions to some extent, e.g. [Budsberg et al. 2013; Museth 2013a], it soon became clear that additional steps had to be taken to address the unprecedented scale of both the fluid simulations and the subsequent surfacing of the resulting animated particles.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128030269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementing efficient virtual shadow maps for many lights","authors":"Ola Olsson, Erik Sintorn, Viktor Kämpe, M. Billeter, Ulf Assarsson","doi":"10.1145/2614106.2614202","DOIUrl":"https://doi.org/10.1145/2614106.2614202","url":null,"abstract":"In the past few years, several techniques have been presented that enable real-time shading using many hundreds or thousands of lights [Harada et al. 2013]. However, only recently has a comprehensive study including shadows been presented by Olsson et al. [2014], where real-time performance is achieved for several hundred light sources with high quality and controllable memory footprint. The new algorithm uses many modern features of OpenGL and contains many design choices only described very briefly in the paper. We present additional details and focus on the practical implementation aspects of the system, in order to facilitate the implementation of the algorithm for the game development community.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131133577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OpenVL: a developer-level abstraction of computer vision","authors":"G. Miller, S. Fels","doi":"10.1145/2614106.2614206","DOIUrl":"https://doi.org/10.1145/2614106.2614206","url":null,"abstract":"","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127083751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Live real-time animation leveraging machine learning and game engine technology","authors":"Charles Piña, Emiliano Gambaretto, S. Corazza","doi":"10.1145/2614106.2614167","DOIUrl":"https://doi.org/10.1145/2614106.2614167","url":null,"abstract":"","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"18 14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126248929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A continuum model for simulating crowd turbulence","authors":"Abhinav Golas, Rahul Narain, M. Lin","doi":"10.1145/2614106.2614163","DOIUrl":"https://doi.org/10.1145/2614106.2614163","url":null,"abstract":"With increasing world population, we are observing denser and denser crowds in public places. This has led to an increasing incidence of crowd disasters at high densities, known collectively as crowd turbulence [Helbing et al. 2007]. There is an urgent need to understand and simulate such crowds in order to facilitate emergency response, as well as prediction and planning to prevent such emergencies. Simulated crowd turbulence can also be used to augment the fidelity of virtual environments in computer games and movies. In addition, for real-time prediction and response, interactive simulation is an essential requirement.","PeriodicalId":118349,"journal":{"name":"ACM SIGGRAPH 2014 Talks","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129157314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}