Motor Babble: Morphology-Driven Coordinated Control of Articulated Characters
Avinash Ranganath, Avishek Biswas, Ioannis Karamouzas, V. Zordan
Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, 2021. DOI: https://doi.org/10.1145/3487983.3488291
Abstract: Locomotion in humans and animals is highly coordinated, with many joints moving together. Learning similar coordinated locomotion in articulated virtual characters, in the absence of reference motion data, is a challenging task due to the high number of degrees of freedom and the redundancy that comes with it. In this paper, we present a method for learning locomotion for virtual characters in a low-dimensional latent space which defines how different joints move together. We introduce a technique called motor babble, wherein a character interacts with its environment by actuating its joints through uncoordinated, low-level (motor) excitations, resulting in a corpus of motion data from which a latent manifold is extracted. Dimensions of the extracted manifold define a wide variety of synergies pertaining to the character and, through reinforcement learning, we train the character to learn locomotion in the latent space by selecting a small set of appropriate latent dimensions, along with learning the corresponding policy.
How to train your dog: Neural enhancement of quadruped animations
Donald E. Egan, George Fletcher, Yiguo Qiao, D. Cosker, R. Mcdonnell
Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, 2021. DOI: https://doi.org/10.1145/3487983.3488293
Abstract: Creating realistic quadruped animations is challenging. Producing realistic animations using methods such as key-framing is time-consuming and requires much artistic expertise. Alternatively, motion capture methods have their own challenges (getting the animal into a studio, attaching motion capture markers, and getting the animal to put on the desired performance), and the resulting animation will still most likely require clean-up. It would be useful if an animator could provide an initial rough animation and in return be given a corresponding high-quality, realistic one. To this end, we present a deep-learning approach for the automatic enhancement of quadruped animations. Given an initial animation, possibly lacking the subtle details of true quadruped motion and/or containing small errors, our results show that it is possible for a neural network to learn how to add these subtleties and correct errors to produce an enhanced animation while preserving the semantics and context of the initial animation. Our work also has potential uses in other applications; for example, its ability to run in real time means it could form part of a quadruped embodiment system.
{"title":"A Constraint-based Formulation of Stable Neo-Hookean Materials","authors":"M. Macklin, Matthias Müller","doi":"10.1145/3487983.3488289","DOIUrl":"https://doi.org/10.1145/3487983.3488289","url":null,"abstract":"In computer graphics, soft body simulation is often used to animate soft tissue on characters or rubber like objects. Both are highly incompressible, however commonly used models such as co-rotational FEM, show significant volume loss, even under moderate strain. The Neo-Hookean model has recently become popular in graphics. It has superior volume conservation, recovers from inverted states, and does not require a polar decomposition. However, solvers for Neo-Hookean finite-element problems are typically based on Newton methods, which require energy Hessians, their Eigen-decomposition, and sophisticated linear solvers. In addition, minimizing the energy directly in this way does not accommodate modeling incompressible materials since it would require infinitely stiff forces. In this paper we present a constraint-based model of the Neo-Hookean energy. By decomposing the energy into deviatoric (distortional), and hydrostatic (volume preserving) constraints, we can apply iterative constrained-optimization methods that require only first-order gradients. We compare our constraint-based formulation to state-of-the-art force-based solvers and show that our method is often an order of magnitude more efficient for stiff volume preserving materials.","PeriodicalId":170509,"journal":{"name":"Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126402579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception of Motion Variations in Large-Scale Virtual Human Crowds
Robin Adili, Benjamin Niay, Katja Zibrek, A. Olivier, J. Pettré, Ludovic Hoyet
Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, 2021. DOI: https://doi.org/10.1145/3487983.3488288
Abstract: Virtual human crowds are regularly featured in movies and video games. With a large number of virtual characters each behaving in their own way, spectacular scenes can be produced. The more diverse the characters and their behaviors are, the more realistic the virtual crowd is expected to appear. Hence, creating virtual crowds is a trade-off between the cost associated with acquiring more diverse assets, namely more virtual characters with their animations, and achieving better realism. In this paper, our focus is on the perceived variety in virtual crowd character motions. We present an experiment exploring whether observers are able to identify virtual crowds including motion clones in the case of large-scale crowds (from 250 to 1000 characters). As it is not possible to acquire individual motions for such numbers of characters, we rely on a state-of-the-art motion variation approach to synthesize unique variations of existing examples for each character in the crowd. Participants then compared pairs of videos, where each character was animated either with a unique motion or using a subset of these motions. Our results show that virtual crowds with more than two motions (one per gender) were perceptually equivalent, regardless of their size. We believe these findings can help create efficient crowd applications, and are an additional step towards a broader understanding of the perception of motion variety.
{"title":"Emulating Foveated Path Tracing","authors":"A. Polychronakis, G. Koulieris, K. Mania","doi":"10.1145/3487983.3488295","DOIUrl":"https://doi.org/10.1145/3487983.3488295","url":null,"abstract":"At full resolution, path tracing cannot be deployed in real-time based on current graphics hardware due to slow convergence times and noisy outputs, despite recent advances in denoisers. In this work, we develop a perceptual sandbox emulating a foveated path tracer to determine the eccentricity angle thresholds that enable imperceptible foveated path tracing. In a foveated path tracer the number of rays fired can be decreased, and thus performance can be increased. For this study, due to current hardware limitations prohibiting real-time path-tracing for multiple samples-per-pixel, we pre-render image buffers and emulate foveated rendering as a post-process by selectively blending the pre-rendered content, driven by an eye tracker capturing eye motion. We then perform three experiments to estimate conservative thresholds of eccentricity boundaries for which image manipulations are imperceptible. Contrary to our expectation of a single threshold across the three experiments, our results indicated three different average thresholds, one for each experiment. We hypothesise that this is due to the dissimilarity of the methodologies, i.e., A-B testing vs sequential presentation vs custom adjustment of eccentricities affecting the perceptibility of peripheral blur among others. We estimate, for the first time for path tracing, specific thresholds of eccentricity that limit any perceptual repercussions whilst maintaining high performance. We perform an analysis to determine potential computational complexity reductions due to foveation in path tracing. Our analysis shows a significant boost in path-tracing performance (≥ 2x − 3x) using our foveated rendering method as a result of the reduction in the primary rays.","PeriodicalId":170509,"journal":{"name":"Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121999421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ego-Interaction: Visual Hand-Object Pose Correction for VR Experiences
Catherine Taylor, M. Evans, Eleanor Crellin, M. Parsons, D. Cosker
Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, 2021. DOI: https://doi.org/10.1145/3487983.3488290
Abstract: Immersive virtual reality (VR) experiences may track both a user's hands and a physical object at the same time and use the information to animate computer-generated representations of the two interacting. However, rendering without visual artefacts requires highly accurate tracking of the hands and the objects themselves, as well as their relative locations, which is made even more difficult when the objects are articulated or deformable. If this tracking is incorrect, the quality and immersion of the visual experience are reduced. In this paper we turn the problem around: instead of focusing on producing quality renders of hand-object interactions by improving tracking quality, we acknowledge there will be tracking errors and focus on fixing the visualisations. We propose a Deep Neural Network (DNN) that modifies hand pose based on its position relative to the object. However, to train the network we require sufficient labelled data. We therefore also present a new dataset of hand-object interactions, Ego-Interaction. This is the first hand-object interaction dataset with egocentric RGBD videos and 3D ground truth data for both rigid and non-rigid objects. The Ego-Interaction dataset contains 92 sequences with 4 rigid, 1 articulated and 4 non-rigid objects, and demonstrates hand-object interactions with one and two hands carefully captured, rigged and animated using motion capture. We provide our dataset as a general resource for researchers in the VR and AI community interested in other hand-object and egocentric tracking related problems.
A2X: An Agent and Environment Interaction Benchmark for Multimodal Human Trajectory Prediction
Samuel S. Sohn, Mihee Lee, Seonghyeon Moon, Gang Qiao, Muhammad Usman, Sejong Yoon, V. Pavlovic, Mubbasir Kapadia
Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, 2021. DOI: https://doi.org/10.1145/3487983.3488302
Abstract: In recent years, human trajectory prediction (HTP) has garnered attention in computer vision literature. Although this task has much in common with the longstanding task of crowd simulation, there is little from crowd simulation that has been borrowed, especially in terms of evaluation protocols. The key difference between the two tasks is that HTP is concerned with forecasting multiple steps at a time and capturing the multimodality of real human trajectories. A majority of HTP models are trained on the same few datasets, which feature small, transient interactions between real people and little to no interaction between people and the environment. Unsurprisingly, when tested on crowd egress scenarios, these models produce erroneous trajectories that accelerate too quickly and collide too frequently, but the metrics used in HTP literature cannot convey these particular issues. To address these challenges, we propose (1) the A2X dataset, which has simulated crowd egress and complex navigation scenarios that compensate for the lack of agent-to-environment interaction in existing real datasets, and (2) evaluation metrics that convey model performance with more reliability and nuance. A subset of these metrics are novel multiverse metrics, which are better-suited for multimodal models than existing metrics. The dataset is available at: https://mubbasir.github.io/HTP-benchmark/.
{"title":"ESPEFs: Exponential Spring Potential Energy Functions for Simulating Deformable Objects","authors":"O. Cetinaslan","doi":"10.1145/3487983.3488303","DOIUrl":"https://doi.org/10.1145/3487983.3488303","url":null,"abstract":"Extended Position-based Dynamics (XPBD) is a well-known method to carry out the simulation of deformable objects. It extends the Position-based Dynamics (PBD) algorithm with a compliance parameter for the material stiffness and implicitly adapts the damping function within the Gauss-Seidel iteration. Although the XPBD method improves upon PBD, it can be cumbersome to fine-tune the required parameters for the desired material properties of the deformable objects. In this paper, we introduce the exponential spring potential energy functions (ESPEFs) for the XPBD simulation of the deformable objects with reduced parameter adjustments. Our method reformulates the well-known spring potential energy functions on an exponential basis which provides more vivid motion during physics-based simulations. ESPEFs enrich the hyperelasticity of the deformable models without any additional effort while the classical methods require cumbersome parameter tunings with trial-and-error tests. To demonstrate the benefits of ESPEFs, we extensively compare our simulation results with the well-known spring models, strain-based dynamics including the constitutive materials and the output of another common iterative solver (Projective Dynamics). The resulting approach is simple, stable, interactive and produces visually pleasing results.","PeriodicalId":170509,"journal":{"name":"Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128398873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PFPN: Continuous Control of Physically Simulated Characters using Particle Filtering Policy Network","authors":"Pei Xu, Ioannis Karamouzas","doi":"10.1145/3487983.3488301","DOIUrl":"https://doi.org/10.1145/3487983.3488301","url":null,"abstract":"Data-driven methods for physics-based character control using reinforcement learning have been successfully applied to generate high-quality motions. However, existing approaches typically rely on Gaussian distributions to represent the action policy, which can prematurely commit to suboptimal actions when solving high-dimensional continuous control problems for highly-articulated characters. In this paper, to improve the learning performance of physics-based character controllers, we propose a framework that considers a particle-based action policy as a substitute for Gaussian policies. We exploit particle filtering to dynamically explore and discretize the action space, and track the posterior policy represented as a mixture distribution. The resulting policy can replace the unimodal Gaussian policy which has been the staple for character control problems, without changing the underlying model architecture of the reinforcement learning algorithm used to perform policy optimization. We demonstrate the applicability of our approach on various motion capture imitation tasks. Baselines using our particle-based policies achieve better imitation performance and speed of convergence as compared to corresponding implementations using Gaussians, and are more robust to external perturbations during character control. Related code is available at: https://motion-lab.github.io/PFPN.","PeriodicalId":170509,"journal":{"name":"Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games","volume":"1 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113989739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}