{"title":"Layered modular action control for communicative humanoids","authors":"K. Thórisson","doi":"10.1109/CA.1997.601055","DOIUrl":"https://doi.org/10.1109/CA.1997.601055","url":null,"abstract":"Face-to-face interaction between people is generally effortless and effective. We exchange glances, take turns speaking and make facial and manual gestures to achieve the goals of the dialogue. This paper describes an action composition and selection architecture for synthetic characters capable of full-duplex, real-time face-to-face interaction with a human. This architecture is part of a computational model of psychosocial dialogue skills, called Y_m_i_r_, that bridges between multimodal perception and multimodal action generation. To test the architecture, a prototype humanoid has been implemented, named G_a_n_d_a_ l_f_, who commands a graphical model of the solar system and can engage in task-directed dialogue with people using speech, manual and facial gesture. Gandalf has been tested in interaction with users and has been shown capable of fluid turn-taking and multimodal dialogue. The primary focus in this paper will be on the action selection mechanisms and low-level composition of motor commands. An overview is also given of the Ymir model and Gandalf's graphical representation.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115665481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An animation interface designed for motion capture","authors":"T. Molet, Zhiyong Huang, R. Boulic, D. Thalmann","doi":"10.1109/CA.1997.601044","DOIUrl":"https://doi.org/10.1109/CA.1997.601044","url":null,"abstract":"We present an animation interface designed to conveniently control the motion capture process. The transition of performer's hand gestures tracked by a dataglove is recognized for software remote control. An intuitive camera metaphor allows one to specify the viewpoint location using the magnetic sensors strapped to the performer's head. The human motion capture is based on the Anatomical Converter, a toolkit to convert sensor measurements into human anatomical rotations in real time. An improved human motion capture technique, the multi-joint control, is introduced.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121515580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dirichlet free-form deformations and their application to hand simulation","authors":"L. Moccozet, N. Magnenat-Thalmann","doi":"10.1109/CA.1997.601047","DOIUrl":"https://doi.org/10.1109/CA.1997.601047","url":null,"abstract":"Presents a generalized method for free-form deformations (FFDs) that combines the traditional FFD model with techniques of scattered data interpolation based on Delaunay and Dirichlet/Voronoi diagrams. This technique offers many advantages over traditional FFDs, including simple control of local deformations. It also keeps all the capabilities of FFD extensions, such as extended FFDs and direct FFDs. The deformation model has much potential for 3D modeling and animation. We choose to illustrate this with a non-trivial human simulation task: hand animation. We implement a multi-layer deformation model where Dirichlet FFDs (DFFDs) are used to simulate the intermediate layer between the skeleton and the skin.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126218783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Arc-length-based axial deformation and length preserved animation","authors":"Qunsheng Peng, Xiaogang Jin, Jieqing Feng","doi":"10.1109/CA.1997.601045","DOIUrl":"https://doi.org/10.1109/CA.1997.601045","url":null,"abstract":"In real life, some objects may deform along axial curves and the lengths of their skeletons usually remain constant during the axial deformation, such as a swimming fish, a swaying tree, etc. This paper presents a practical approach of arc-length-based axial deformation and axial-length-preserved animation. The space spanned by the arc-length parameter and the rotation-minimizing frame on the axis is taken as the embedding space. During animation, the keyframe axial curves are consistently approximated by polylines after sufficient subdivisions and both the edge lengths and the directional vertex angles of the keyframe polylines (or unit edge vectors) are then interpolated to generate the intermediate polylines which are regarded as the discrete expressions of the intermediate axes. Experiments show that our method is very useful, intuitive and easy to control.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134377032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PROVIS: a platform for virtual prototyping and maintenance tests","authors":"O. Balet, H. Luga, Y. Duthen, R. Caubet","doi":"10.1109/CA.1997.601038","DOIUrl":"https://doi.org/10.1109/CA.1997.601038","url":null,"abstract":"Prototype design and testing is an indispensable stage of any project development in many fields of activity, such as aeronautical, spatial, automotive industries or architecture. Scientists and engineers rely on prototyping for a visual confirmation and validation of both their ideas and concepts. The paper describes the design and implementation of PROVIS, a system for prototyping virtual systems in a collaborative way. The main goal of our works is to allow designers to replace physical mock-ups by virtual ones in order to test the integration and space requirements of the model components. To this end, PROVIS combines a \"desktop VR\" interface with an interactive 3D display. While discussing its software and hardware architecture, the paper describes how PROVIS allows users to interact with flexible or articulated objects, and to test maintenance procedures and component accessibility thanks to an original use of genetic algorithms.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116194937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavioural simulation in voxel space","authors":"Hongwen Zhang, B. Wyvill","doi":"10.1109/CA.1997.601052","DOIUrl":"https://doi.org/10.1109/CA.1997.601052","url":null,"abstract":"We present a framework for behavioural simulation. A uniform voxel space representation is used to implement the environment mechanism of the framework. An example environment is presented where actors with olfactory sensors are able to direct their motions according to a scent of the chemicals in the voxel space based on mass transfer theory. Objects in the environment are scan converted to the voxel representation to facilitate collision detection. An example of using the framework to simulate the behaviour of a group of artificial butterflies is used to demonstrate the ideas of this research.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128164005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Capturing and analyzing stability of human body motions using video cameras","authors":"Y. Shinagawa, Jun-ichi Nakajima, T. Kunii, Kazuhiro Hara","doi":"10.1109/CA.1997.601039","DOIUrl":"https://doi.org/10.1109/CA.1997.601039","url":null,"abstract":"The need for capturing human body motions has been increasing recently for making movies, sports instruction systems and robots that can simulate human motions. The paper proposes a method to facilitate motion capturing using inexpensive video cameras. In our system, a few cameras are used to obtain multiple views of a human body and a three dimensional (3D) volume consistent with the views is created. A model of the human body is then fitted to the volume to obtain the configuration of the human body. We also propose a method to analyze the stability of human postures. We have analyzed a technique of the traditional Chinese martial art, Shorinji Kempo, based on stability to show the effectiveness of our method.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116607581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Animating flow fields: rendering of oriented line integral convolution","authors":"R. Wegenkittl, E. Gröller, W. Purgathofer","doi":"10.1109/CA.1997.601035","DOIUrl":"https://doi.org/10.1109/CA.1997.601035","url":null,"abstract":"Line Integral Convolution (LIC) is a common approach for the visualisation of 2D vector fields. It is well suited for visualizing the direction of a flow field, but it gives no information about the orientation of the underlying vectors. We introduce Oriented Line Integral Convolution (OLIC), where direction as well as orientation are encoded within the resulting image. This is achieved by using a sparse input texture and a ramp like (anisotropic) convolution kernel. This method can be used for animations, whereby the computation of so called pixel traces fastens the calculation process. Various OLICs illustrating simple and real world vector fields are shown.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129830372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous behavior control of virtual actors based on the AIR model","authors":"Jun'ichi Sato, T. Miyasato","doi":"10.1109/CA.1997.601051","DOIUrl":"https://doi.org/10.1109/CA.1997.601051","url":null,"abstract":"We developed a mechanism in which the behavior of actors in computer-generated virtual space is determined in response their interaction with each other. We call this mechanism the Autonomous Interactive Reaction model (AIR model). The AIR model is a simplified mechanism for determining human-like behavior expressed in terms of parameters for emotion and personality, and an actor's behavior is determined using these parameters. The AIR model adjusts these parameters in response to the communication between actors, so each actor's behavior is determined as a result of his interaction with other actors.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122481208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multilevel modelling of virtual urban environments for behavioural animation","authors":"S. Donikian","doi":"10.1109/CA.1997.601054","DOIUrl":"https://doi.org/10.1109/CA.1997.601054","url":null,"abstract":"The Praxitele project is responsible for the design of a new kind of transportation in an urban environment, which consists of a fleet of electric public cars. These public cars are capable of autonomous motion on given journeys between stations. The realization of such a project requires experimentations with the behaviour of autonomous vehicles in the urban environment. Because of the danger connected with these kinds of experiments in a real site, it,vas necessary to design a virtual urban environment in which simulations could be done. Reproducing the real traffic of a city, as completely as possible, implies the simulation of autonomous entities like living beings. Such entities are able to perceive their environment, to communicate with other creatures and to execute some actions either on themselves or on their environment. Databases for virtual environments are often confined to the geometric level, when they must also contain physical, topological and semantic information. Accordingly in this paper we present VUEMS, a Virtual Urban Environment Modelling System which is designed to manage different levels of representation by assembling geometric, topological and semantic data in the field of traffic simulation. From real world data (when available), we construct a Model of the Virtual Urban Environment, integrating all the information needed to describe the realistic behaviour of car drivers.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128574887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}