{"title":"High level specification and control of communication gestures: the GESSYCA system","authors":"Thierry Lebourque, S. Gibet","doi":"10.1109/CA.1999.781196","DOIUrl":"https://doi.org/10.1109/CA.1999.781196","url":null,"abstract":"This paper describes a complete system for the specification and the generation of communication gestures. A high level language for the specification of hand-arm communication gestures has been developed. This language is based both on a discrete description of space, and on a movement decomposition inspired from sign language gestures. Communication gestures are represented through symbolic commands which can be described by qualitative data, and translated in terms of spatiotemporal targets driving a generation system. Such an approach is possible for the class of generation models controlled through key-points information. The generation model used in our approach is composed of a set of sensory-motor servo-loops. Each of these models resolves in real time the inversion of the servo-loop, from the direct specification of location targets, while satisfying psycho-motor laws of biological movement. The whole control system is applied to the synthesis, and a validation of the synthesized movements is presented.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122473102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MPEG-4 compatible faces from orthogonal photos","authors":"Won-Sook Lee, M. Escher, Gaël Sannier, N. Magnenat-Thalmann","doi":"10.1109/CA.1999.781211","DOIUrl":"https://doi.org/10.1109/CA.1999.781211","url":null,"abstract":"MPEG-4 is scheduled to become an international standard in March 1999. The paper demonstrates an experiment for a virtual cloning method and animation system, which is compatible with the MPEG-4 standard facial object specification. Our method uses orthogonal photos (front and side view) as input and reconstructs the 3D facial model. The method is based on extracting MPEG-4 face definition parameters (FDP) from photos, which initializes a custom face in a more capable interface, and deforming a generic model. Texture mapping is employed using an image composed of the two orthogonal images, which is done completely automatically. A reconstructed head can be animated immediately inside our animation system, which is adapted to the MPEG-4 standard specification of face animation parameters (FAP). The result is integrated into our virtual human director (VHD) system.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"458 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123876263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Alhambra: a system for producing 2D animation","authors":"Domingo Martín, J. Torres","doi":"10.1109/CA.1999.781197","DOIUrl":"https://doi.org/10.1109/CA.1999.781197","url":null,"abstract":"There is a great interest in producing computer animation that looks like 2D classic animation. The flat shading, silhouettes and inside contour lines are all visual characteristics that, joined to flexible expressiveness, constitute the basic elements of 2D animation. We have developed methods for obtaining the silhouettes and interior curves from polygonal models. Virtual lights is a new method for modeling the visualization of inside curves. The need for flexibility of the model is achieved by the use of hierarchical nonlinear transformations.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131932938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotionally expressive agents","authors":"M. S. El-Nasr, T. Ioerger, J. Yen, D. House, F. Parke","doi":"10.1109/CA.1999.781198","DOIUrl":"https://doi.org/10.1109/CA.1999.781198","url":null,"abstract":"The ability to express emotions is important for creating believable interactive characters. To simulate emotional expressions in an interactive environment, an intelligent agent needs both an adaptive model for generating believable responses, and a visualization model for mapping emotions into facial expressions. Recent advances in intelligent agents and in facial modeling have produced effective algorithms for these tasks independently. We describe a method for integrating these algorithms to create an interactive simulation of an agent that produces appropriate facial expressions in a dynamic environment. Our approach to combining a model of emotions with a facial model represents a first step towards developing the technology of a truly believable interactive agent which has a wide range of applications from designing intelligent training systems to video games and animation tools.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114941785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visible volume buffer for efficient hair expression and shadow generation","authors":"Waiming Kong, M. Nakajima","doi":"10.1109/CA.1999.781199","DOIUrl":"https://doi.org/10.1109/CA.1999.781199","url":null,"abstract":"Much research has been conducted on hair modeling and hair rendering with considerable success. However, the immense number of hair strands means that memory and CPU time requirements are very severe. To reduce the memory and the time needed for hair modeling and rendering, a visible volume buffer is proposed. Instead of using thousands of thin hairs, the memory usage and hair modeling time can be reduced by using coarse background hairs and fine surface hairs. The background hairs can be constructed by using thick hairs. To improve the look of the hair model, the background hair near the surface is broken down into numerous thin hairs and rendered. The visible volume buffer is used to determine the surface hairs. The rendering time of the background and surface hairs is found to be faster than the conventional hair model by a factor of more than four with little loss in image quality. The visible volume buffer is also used to produce shadows for the hair model.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130247475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual human animation based on movement observation and cognitive behavior models","authors":"N. Badler, D. Chi, Sonu Chopra-Khullar","doi":"10.1109/CA.1999.781206","DOIUrl":"https://doi.org/10.1109/CA.1999.781206","url":null,"abstract":"Automatically animating virtual humans with actions that reflect real human motions is still a challenge. We present a framework for animation that is based on utilizing empirical and validated data from movement observation and cognitive psychology. To illustrate these, we demonstrate a mapping from effort motion factors onto expressive arm movements, and from cognitive data to autonomous attention behaviors. We conclude with a discussion on the implications of this approach for the future of real time virtual human animation.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114200299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A software system to carry-out virtual experiments on human motion","authors":"F. Multon, J. Nougaret, G. Hégron, Luc Millet, B. Arnaldi","doi":"10.1109/CA.1999.781195","DOIUrl":"https://doi.org/10.1109/CA.1999.781195","url":null,"abstract":"This work presents a simulation system designed to carry out virtual experiments on human motion. 3D visualization, automatic code generation and generic control design patterns provide biomechanicians and medics with dynamic simulation tools. The paper first deals with the design of mechanical models of human beings. It also presents design patterns of controllers for an upper-limb model composed of 11 degrees of freedom. As an example, two controllers are presented in order to illustrate these design patterns. The paper also presents a user-friendly interface dedicated to medics that makes it possible to enter orders in natural language.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124153769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Animation of human walking in virtual environments","authors":"Shih-kai Chung, J. Hahn","doi":"10.1109/CA.1999.781194","DOIUrl":"https://doi.org/10.1109/CA.1999.781194","url":null,"abstract":"This paper presents an interactive hierarchical motion control system dedicated to the animation of human figure locomotion in virtual environments. As observed in gait experiments, controlling the trajectories of the feet during gait is a precise end-point control task. Inverse kinematics with optimal approaches are used to control the complex relationships between the motion of the body and the coordination of its legs. For each step, the simulation of the support leg is executed first, followed by the swing leg, which incorporates the position of the pelvis from the support leg. That is, the foot placement of the support leg serves as the kinematics constraint while the position of the pelvis is defined through the evaluation of a control criteria optimization. Then, the swing leg movement is defined to satisfy two criteria in order: collision avoidance and control criteria optimization. Finally, animation attributes, such as controlling parameters and pre-processed motion modules, are applied to achieve a variety of personalities and walking styles.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130337814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time collision detection for virtual surgery","authors":"J. Lombardo, Marie-Paule Cani, Fabrice Neyret","doi":"10.1109/CA.1999.781201","DOIUrl":"https://doi.org/10.1109/CA.1999.781201","url":null,"abstract":"We present a simple method for performing real-time collision detection in a virtual surgery environment. The method relies on the graphics hardware for testing the interpenetration between a virtual deformable organ and a rigid tool controlled by the user. The method makes it possible to take into account the motion of the tool between two consecutive time steps. For our specific application, the new method runs about a hundred times faster than the well known oriented-bounding-boxes tree method.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134488541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recursive dynamics and optimal control techniques for human motion planning","authors":"Janzen Lo, Dimitris N. Metaxas","doi":"10.1109/CA.1999.781215","DOIUrl":"https://doi.org/10.1109/CA.1999.781215","url":null,"abstract":"We present an efficient optimal control based approach to simulate dynamically correct human movements. We model virtual humans as a kinematic chain consisting of serial, closed loop, and tree-structures. To overcome the complexity limitations of the classical Lagrangian formulation and to include knowledge from biomechanical studies, we have developed a minimum-torque motion planning method. This new method is based on the use of optimal control theory within a recursive dynamics framework. Our dynamic motion planning methodology achieves high efficiency regardless of the figure topology. As opposed to a Lagrangian formulation, it obviates the need for the reformulation of the dynamic equations for different structured articulated figures. We then use a quasi-Newton method based nonlinear programming technique to solve our minimal torque-based human motion planning problem. This method achieves superlinear convergence. We use the screw theoretical method to compute analytically the necessary gradient of the motion and force. This provides a better conditioned optimization computation and allows the robust and efficient implementation of our method. Cubic spline functions have been used to make the search space for an optimal solution finite. We demonstrate the efficacy of our proposed method based on a variety of human motion tasks involving open and closed loop kinematic chains. Our models are built using parameters chosen from an anthropomorphic database. The results demonstrate that our approach generates natural looking and physically correct human motions.","PeriodicalId":108994,"journal":{"name":"Proceedings Computer Animation 1999","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128022601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}