{"title":"Physics-aided editing of simulation-ready muscles for visual effects","authors":"F. Turchet, M. Romeo, O. Fryazinov","doi":"10.1145/2945078.2945158","DOIUrl":"https://doi.org/10.1145/2945078.2945158","url":null,"abstract":"Recent developments in character rigging and animation shape the computer graphics industry in general and visual effects in particular. Advances in deformation techniques, which include linear blend skinning, dual quaternion skinning and shape interpolation, meet with sophisticated muscle and skin simulations to produce more realistic results. Effects such as skin sliding, wrinkling and contact of subcutaneous fat and muscles become possible when simulating the anatomy of human-like characters as well as creatures in feature films. One of the main techniques adopted nowadays in the industry is the Finite Element Method (FEM) for deformable objects. Despite the life-like results, the setup cost to generate and tweak volumetric anatomical models for a FEM solver is not only very high, but it cannot easily guarantee the quality of the models either, in terms of simulation requirements. In a production environment in fact (see Fig. 1), models often require additional processing in order to be ready for FEM simulations. For example, self-intersections or interpenetrations in rest pose may result in unwanted forces from the collision detection and response algorithms that affect negatively the simulation at its start.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114795008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal augmentation of surfaces using conductive 3D printing","authors":"Caio Brito, Gutenberg Barros, W. Correia, V. Teichrieb, J. M. Teixeira","doi":"10.1145/2945078.2945093","DOIUrl":"https://doi.org/10.1145/2945078.2945093","url":null,"abstract":"Accessible tactile pictures (ATPs) consist of tactile representations that convey different kinds of messages and present information through the sense of touch. Traditional approaches use contours and patterns, which create a distinct and recognizable shape and enables separate objects to be identified. The success rate for recognizing pictures by touch is much lower than it would be for vision. Besides that, some pictures are more frequently recognized than others. Finally, there is also some variation from individual to individual: while some blind people recognize many images, others recognize few. Auditory support can improve the points listed before, even eliminating the need for sighted assistance.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114846450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersive paleoart: reconstructing dreadnoughtus schrani and remediating the science documentary for cinematic virtual reality","authors":"V. Feldman","doi":"10.1145/2945078.2945160","DOIUrl":"https://doi.org/10.1145/2945078.2945160","url":null,"abstract":"This project is a synthesis of digital paleoart reconstruction, prototype VR pipeline design, and the remediation of structural narrative principles for immersive media. We approach common issues associated with the accurate portrayal of dinosaurs in media, Cinematic Virtual Reality (CVR) production, and the direction of viewer attention in immersive digital environments. After developing and testing a stable CVR workflow, we designed and produced a piece of scientific VR Paleoart content intended for educational outreach. Our production methods include a state-of-the-art CGI dinosaur reconstruction informed by comparative anatomy and biomechanical simulation, stereoscopic spherical rendering, and photographic CVR film production. Our approach is validated through the completion of a CVR documentary about the titanosaur Dreadnoughtus schrani, one of the largest dinosaurs yet discovered. This documentary, starring paleontologist Dr. Ken Lacovara, will be made publicly available for all common VR distribution platforms. Our goal is to make scientific CVR content accessible to an audience of mobile device owners, taking advantage of the VR media disruption to establish new design guidelines for educational media.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127627505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quick, unconstrained, approximate l-shape method","authors":"Kwamina Edum-Fotwe, P. Shepherd, Matthew Brown, Dan Harper, Richard Dinnis","doi":"10.1145/2945078.2945163","DOIUrl":"https://doi.org/10.1145/2945078.2945163","url":null,"abstract":"This simple paper describes an intuitive data-driven approach to reconstructing architectural building-footprints from structured or unstructured 2D pointsets. The function is fast, accurate and unconstrained. Further unlike the prevalent L-Shape detectors predicated on a shape's skeletal descriptor [Szeliski 2010], the method is robust to sensing noise at the boundary of a 2D pointset.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121425600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RatCAVE: calibration of a projection virtual reality system","authors":"Ari Rapkin Blenkhorn, Yu Wang, M. Olano","doi":"10.1145/2945078.2945091","DOIUrl":"https://doi.org/10.1145/2945078.2945091","url":null,"abstract":"We have created a suite of automated tools to calibrate and configure a projection virtual reality system. Test subjects (rats) explore an interactive computer-graphics environment presented on a large curved screen using multiple projectors. The locations and characteristics of the projectors can vary and the shape of the screen may be complex. We place several cameras around the workspace for redundant coverage. We locate each projector's hotspot as seen by each camera, and produce a brightness profile which tells the projector how much to dim each pixel of its output to achieve uniform output. We reconstruct the 3D geometry of the screen and the location of each projector using shape-from-motion and structured-light multi-camera computer vision techniques. We determine which projected pixel corresponds to a given view direction for the rat. From these, we create a warping profile for each projector, which tells it how to pre-distort its output image to appear undistorted to the rat's viewpoint. We apply both pre-distortion and hotspot correction before displaying to the screen.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133170632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PulmonaReality: transforming pediatric pulmonary function experience using virtual reality","authors":"Andrew Jacobson, J. Seo","doi":"10.1145/2945078.2945164","DOIUrl":"https://doi.org/10.1145/2945078.2945164","url":null,"abstract":"This paper presents PulmonaReality, an interactive virtual reality game aimed at young patients to help immerse them into a world that makes pulmonary function tests more enjoyable for the user while providing more reliable results for the examiner. Computer games designed to work with medical tests have been shown to have potential. While there are existing games out there, they are beginning to show their age in comparison to many games played by modern-day patients. The design of our project focuses on usability and enjoyment for young children. In our preliminary user studies, children reported that the system was easy to use with minimal instruction and evoked a sense of wonder when they experienced our different interactive 3D environments.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134326421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sculpting fluids: a new and intuitive approach to art-directable fluids","authors":"Tuur Stuyck, P. Dutré","doi":"10.1145/2945078.2945089","DOIUrl":"https://doi.org/10.1145/2945078.2945089","url":null,"abstract":"Fluid simulations are very useful for creating physically based water effects in computer graphics but are notoriously hard to control. In this talk we propose a novel and intuitive animation technique for fluid animations using interactive direct manipulation of the simulated fluid inspired by clay sculpting. Artists can simply shape the fluid directly into the desired visual effect whilst the fluid still adheres to its physical properties such as surface tension and volume preservation. Our approach is faster and much more intuitive compared to previous work which relies on indirect approaches such as providing reference geometry or density fields. It makes it very easy, even for novice users, to modify simulations ranging from enlarging splashes or altering droplet shapes to adjusting the flow of a large fluid body. The sculpted fluid shapes are incorporated into the simulation using guided re-simulation using control theory instead of simply using geometric deformations resulting in natural-looking animations.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133109485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining multiple flow fields for editing existing fluid animations","authors":"Syuhei Sato, Y. Dobashi, T. Nishita","doi":"10.1145/2945078.2945140","DOIUrl":"https://doi.org/10.1145/2945078.2945140","url":null,"abstract":"In this paper, we develop a method for synthesizing desired flow fields by combining existing multiple flow fields. Our system allows the user to specify arbitrary regions of the precomputed flow fields and combine them to synthesize a new flow field. In order to maintain plausible physical behavior, we ensure the incompressibility for the combined flow field. To address this, we use stream functions for representing the flow fields. However, there exist discontinuities at the boundaries between the combined flow fields, resulting in unnatural animation of fluids. In order to remove the discontinuities, we apply Poisson image editing to the stream functions.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"60 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115589181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time collection and analysis of 3-Kinect v2 skeleton data in a single application","authors":"Serguei A. Mokhov, Miao Song, Jonathan Llewellyn, J. Zhang, A. Charette, Ruofan Wu, Shuiying Ge","doi":"10.1145/2945078.2945131","DOIUrl":"https://doi.org/10.1145/2945078.2945131","url":null,"abstract":"It was not possible to do reliable 3D skeletal tracking with the currently publicly available inexpensive consumer grade hardware/software tools, such as depth cameras and their SDKs using multiple of such sensors in a single application (e.g., a game, motion recording for animation, or 3D scanning). We successfully attached 3 Kinect v2 sensors to a single application to track skeletal data without using Microsoft's Kinect 2 SDK. We created a new toolkit -- MultiCamTk++ for 3 or more Kinects v2 with skeleton support in C++. It is a successor of our previous version, MultiCamTk, done in Processing/Java that had no skeletal tracking. We achieve high resiliency and good frame rate even if 1--2 Kinects are disconnected at runtime. We are able to receive the skeleton data from the multiple sources to correlate the coordinates for spatial 3D user tracking.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116061063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancement of 3D character animations through the use of automatically generated guidelines inspired by traditional art concepts","authors":"Jose A. S. Fonseca, Denis Kravtsov, Anargyros Sarafopoulos, J. Zhang","doi":"10.1145/2945078.2945114","DOIUrl":"https://doi.org/10.1145/2945078.2945114","url":null,"abstract":"Effective communication through character animation depends on the recognition of the performed body expressions. The creation of the right body postures is crucial for character animation in the context of animated films and games, as it allows for conveying the right set of emotions to the viewer. Audience needs to be able to identify familiar features mainly based on their own experiences, which allows the viewer to relate and feel empathy to observed characters. It is, therefore, crucial for the animator to accurately create the right posture and expressive body motion, during the posing phase of the animation process.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116131067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}