{"title":"Emotional posturing: a method towards achieving emotional figure animation","authors":"D. J. Densley, P. Willis","doi":"10.1109/CA.1997.601034","DOIUrl":"https://doi.org/10.1109/CA.1997.601034","url":null,"abstract":"Putting emotion into figure animation is a difficult task. The paper describes a method towards solving this problem. An emotional model is proposed based on psychological theory and this is integrated into the posturing of the figure. The system is based on general posturing functions which are interpreted depending on the emotional state of the figure. These functions can affect individual joints but are typically used to modify the movement of areas of the body and general stance.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129097742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic 3D cloning and real-time animation of a human face","authors":"M. Escher, N. Magnenat-Thalmann","doi":"10.1109/CA.1997.601040","DOIUrl":"https://doi.org/10.1109/CA.1997.601040","url":null,"abstract":"We describe our techniques for the automatic cloning of a human face which can be animated in real time using both video and audio inputs. Our system can be used for video conferencing and telecooperative work at various sites with shared virtual environment inhabited by virtual clones. This work is part of a European project VIDAS (AC057). A generic face model is used which is modified to fit to the real face. Primarily, two aspects are considered: (1) modeling involving the construction of a 3D texture mapped face model fitting to the real face; (2) animation of the new constructed face. Automatic texture fitting is employed to produce the virtual face with texture mapping from the real face image. A model independent facial animation module provides real time animation. The system allows integration of audio and video inputs and produces a synchronized visual and acoustic output.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127778983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic behaviours for computer animation: the use of Java","authors":"I. Palmer","doi":"10.1109/CA.1997.601059","DOIUrl":"https://doi.org/10.1109/CA.1997.601059","url":null,"abstract":"This paper describes research involving the use of Java to implement dynamic behavioural control of animated objects. Java ideal for this application because it is an object-oriented language that allows dynamic extension and reconfiguration. The system uses sets of external classes for each actor (called 'evaluators') that modify arrays of parameters passed to them. This provides a flexible method of controlling objects by specifying object data in terms of arrays of numerical values and then using evaluators to modify these. The implementation allows these external classes to be loaded either from a known repository for evaluator classes (the simplest scheme) or by using a 'ClassLoader' to load classes from locations specified at run-time. A search agent can be used to find the classes that march a specification stored in a pre-defined format, and the use of partial matching can yield interesting side-effects on unspecified parameters. The scheme is therefore dynamically re-configurable with the possibility of actors in an animation finding and changing their behaviour over the lifetime of the animation by locating and retrieving new evaluator classes. A test-bed has been developed for the scheme that uses simple VRML geometries controlled by the behaviours.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126687659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment criteria for 2D shape transformations in animation","authors":"Jinhui Yu, J. Patterson","doi":"10.1109/CA.1997.601049","DOIUrl":"https://doi.org/10.1109/CA.1997.601049","url":null,"abstract":"The assessment of 2D shape transformations (or morphing) for animation is a difficult task because it is a multi-dimensional problem. Existing morphing techniques pay most attention to shape information interactive control and mathematical simplicity. This paper shows that it is not enough to use shape information alone, and we should consider other factors such as structure, dynamics, timing, etc. The paper also shows that an overall objective assessment of morphing is impossible because factors such as timing are related to subjective judgement, yet local objective assessment criteria, e.g. based on shape, are available. We propose using \"area preservation\" as the shape criterion for the 2D case as an acceptable approximation to \"volume preservation\" in reality, and use it to establish cases in which a number of existing techniques give clearly incorrect results. The possibility of deriving objective assessment criteria for dynamics simulations and timing under certain conditions is discussed.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124799034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast collision detection among multiple moving spheres","authors":"Dong-Jin Kim, L. Guibas, Sung-yong Shin","doi":"10.1109/CA.1997.601033","DOIUrl":"https://doi.org/10.1109/CA.1997.601033","url":null,"abstract":"The paper presents an event driven approach that efficiently detects collisions among multiple moving spheres of uniform radius. We divide the space containing the spheres into uniform subspaces of cell structure. Each sphere intersecting a subspace is tested against the others intersecting the same subspace for possible collisions. We identify three types of events to detect the sequence of all collisions during our simulation: collision, entering, and leaving. The first type of events is due to actual collisions, and the other two types occur when spheres move from subspace to subspace. By tracing all such events in the order of their occurring times, we are able to simulate n moving spheres with proper collision response in O(n/sub c/ log n+n/sub e/ log n) time with O(n) space after O(n log n) time preprocessing, where n/sub c/ and n/sub e/ are the number of actual collisions and that of entering and leaving events during the simulation, respectively. Since n/sub e/ depends on the size of subspaces, we adopt the collision model from kinetic theory for molecular gas (Feynmann et al., 1963) to determine the subspace size that minimizes simulation time. Experimental results show that collision detection can be done in linear time in n over a large range.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131606454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual images as a \"line-test\" to real image animation","authors":"A. Valente, Ó. Mealha","doi":"10.1109/CA.1997.601060","DOIUrl":"https://doi.org/10.1109/CA.1997.601060","url":null,"abstract":"We present in this article a \"Line-test\" solution for use in animation of real image. The \"line-test\" process is based on the use and manipulation of the virtual image and animation to predict the animation that will be executed in real image. We present a solution, that uses laser beams to indicate the position of the object/subject to animate in each photogram, according to the coordinates established in the virtual animation prediction process.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131085260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaborative environments of Intelligent Box for distributed 3D graphic applications","authors":"Y. Okada, Yuzuru Tanaka","doi":"10.1109/CA.1997.601036","DOIUrl":"https://doi.org/10.1109/CA.1997.601036","url":null,"abstract":"The paper treats a constructive visual software development system for interactive 3D graphic applications. Our system, Intelligent Box represents any objects as reactive 3D visual objects, which are called Boxes that can be manually combined with other Boxes. It provides a uniform framework for the concurrent definition of both geometrical compound structures among Boxes and their mutually interactive functional linkages. It works as a user friendly rapid prototyping software development system. We have introduced a collaborative environment into the Intelligent Box system as a function of a particular Box called a RoomBox for distributed 3D graphic applications. Multiple RoomBoxes on different computers share specific user operation events with each other. Those RoomBoxes virtually provide some users with a shared 3D space. Already developed Boxes work in a shared 3D space without any modifications. Therefore, it is possible to construct distributed 3D graphic applications rapidly and easily by using RoomBoxes. The paper introduces the RoomBox, describes its mechanism and shows a few application examples.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133579711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interaction between real and virtual humans in augmented reality","authors":"Selim Balcisoy, D. Thalmann","doi":"10.1109/CA.1997.601037","DOIUrl":"https://doi.org/10.1109/CA.1997.601037","url":null,"abstract":"Interaction between real and virtual humans covers a wide range of topics from creation and animation of virtual actors to computer vision techniques for data acquisition from real world. We discuss the design and implementation of an augmented reality system which allows investigation of different real/virtual interaction aspects. As an example we present an application to create real time interactive drama with real and virtual actors.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123501487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Formal description of auditory scenes","authors":"A. Darvishi, H. Schauer","doi":"10.1109/CA.1997.601042","DOIUrl":"https://doi.org/10.1109/CA.1997.601042","url":null,"abstract":"This paper introduces the concept called auditory scenes, which is a tool for high semantic description of everyday sounds, and two grammar approaches based on this concept. It does not consider auditory scene analysis, which describes the ability of listeners to separate the acoustic events arriving from different environmental sources into separate perceptual representations (streams). The concept of auditory scenes relies on our everyday perception of sounds in daily life. It assumes various perceptual attributes such as duration, volume, position in space, and others, for each individual sound in the scene as well as the temporal, spatial and other relationships between them. Different temporal relationships and some theoretical considerations regarding these issues are presented in depth. The two grammars represent (1) the hierarchical and (2) the autonomous concept. The first grammar approach is similar to music composition and is basically a temporal composition of everyday sounds. By contrast, the second grammar approach defines the event driven non-hierarchical composition of everyday sounds. Before these grammars are discussed, an introduction to the use of sounds in man-machine interaction and the concept of auditory scenes are presented.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134484871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CASUS; an object-oriented three-dimensional animation system for event-oriented simulators","authors":"Volker Luckas, Tanja Broll","doi":"10.1109/CA.1997.601057","DOIUrl":"https://doi.org/10.1109/CA.1997.601057","url":null,"abstract":"This paper describes an object-oriented approach to linking a three-dimensional visualization to an event-oriented simulator. The goal is an extremely realistic visualization which is, therefore well equipped for customer-oriented presentations. This is achieved by using computer-generated three-dimensional animation. In realizing this approach, a modular concept is suggested which allows-aside from offering an easily establishable link to a variety of simulators-the most automated animation possible. The animation system CASUS, a synonym for Computer Animation of Simulation Traces, is discussed at length, as well as the necessary communication tools.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127032014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}