{"title":"Body expression in virtual environments","authors":"L. Emering, R. Boulic, D. Thalmann","doi":"10.1109/ROMAN.1999.900336","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900336","url":null,"abstract":"As technology improves, virtual reality interfaces based on body actions and expressions are becoming more and more important. In the domain of games, an obvious application is the control of the hero by body actions. The robotics domain is another area where such an interface is attractive, especially in the telepresence field. In this paper we discuss an action model along with a real-time recognition system and show how it may be used in virtual environments.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121386461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive behavior for cooperation: a virtual reality application","authors":"C. Sanza, C. Panatier, H. Luga, Y. Duthen","doi":"10.1109/ROMAN.1999.900318","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900318","url":null,"abstract":"We present a behavioral system based on artificial life for animating actors in a virtual reality application. Through a virtual soccer game, we show how a set of autonomous players (called agents) can cooperate and communicate to perform common tasks. The user is immersed in the game. He/she interacts with the other agents and is integrated in the cooperation and communication systems. Every entity learns in real time using a classifier system, which is composed of a set of binary rules and a reward system. The originality of such a method is the ability to build a behavior (by emergence) without initial knowledge. The analysis of the simulation gives interesting results: after convergence, the global behavior of the teams produces coherent movements. Moreover, the introduction of disturbances does not affect the performance of the classifier system.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"76 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127181871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experiments with a steady hand robot in constrained compliant motion and path following","authors":"Rajesh Kumar, P. Jensen, R. Taylor","doi":"10.1109/ROMAN.1999.900321","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900321","url":null,"abstract":"We consider the problem of cooperative manipulation to improve positioning and path following abilities of humans. Using a specially designed actuated manipulator and \"steady hand\" manipulation we report on compliant path following strategies and their experimental evaluation. Detecting lines and simple curves by processing images from an endoscope mounted on the robot, we traverse these curves autonomously, under direct user control, and in an augmented mode of user control. Anisotropic gains based on gradient information from the imaging reduce errors in path traversal.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130465464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of augmented force information on the skill acquisition of teleoperation","authors":"D. Chikura, M. Takahashi, M. Kitamura","doi":"10.1109/ROMAN.1999.900319","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900319","url":null,"abstract":"In the present study, the effect of augmented force information on skill development was studied experimentally by using a robot manipulator operated by a joystick with a force-feedback mechanism. The main issue studied here is to clarify and possibly quantify the effect of augmented force information on skill development by novice operators, and to determine how well operators trained with force support can perform the task without the force information.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130651422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual, tactile and gazing line-action linkage system for 3D shape evaluation in virtual space","authors":"M. Okubo, T. Watanabe","doi":"10.1109/ROMAN.1999.900316","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900316","url":null,"abstract":"A new virtual system for shape evaluation is proposed, which integrates the visual information in virtual space and the tactile and gazing line-action information in real space. The preference of 3D shape images in the proposed virtual space is compared with that of the real photoforming products made from the same data in 28 subjects, by the sensory evaluation of paired comparison and questionnaire analysis. The preference relations among shapes, estimated with the Bradley-Terry model from the sensory evaluation, were found to be consistent in both spaces. This indicates that the proposed system provides almost the same environment as the real space for shape evaluation. The questionnaire results also indicate that the proposed system is adequate for evaluating 3D shapes in virtual space. The system enables one to investigate the human sense for shape evaluation by handling the visual, tactile and gazing line-action information.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115305718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot control of a 3500 tonne mining machine","authors":"Jonathan M. Roberts, G. Winstanley, Peter Corke","doi":"10.1109/ROMAN.1999.900342","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900342","url":null,"abstract":"The mining industry is highly suitable for the application of robotics and automation technology since the work is arduous, dangerous and often repetitive. This paper describes the development of an automation system for a physically large and complex field robotic system - a 3,500 tonne mining machine (a dragline). The major components of the system are discussed with a particular emphasis on the machine/operator interface. A very important aspect of this system is that it must work cooperatively with a human operator, seamlessly passing the control back and forth in order to achieve the main aim - increased productivity.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114853301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic extraction of facial organs and recognition of facial expressions","authors":"H. Kobayashi, S. Suzuki, H. Takahashi","doi":"10.1109/ROMAN.1999.900334","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900334","url":null,"abstract":"This paper deals with a method for automatic contour extraction of facial organs such as eyebrows, eyes and mouth from a deformed face, and automatic categorization and recognition of facial expressions using an unsupervised neural network. We define an elastic contour model in order to hold the contour shape, and determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also define the image energy obtained from brightness differences at the control points on the elastic contour model. Applying dynamic programming, we determine the contour position where the total of the elastic energy and the image energy becomes minimum. We use the transformation values of the control points on the elastic contour model as facial information for neural-network training and for the recognition test, with data obtained from 20 subjects performing 6 typical facial expressions. We found that a 76% correct recognition rate was achieved.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125625868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A 4-legged mobile robot control to observe a human behavior","authors":"T. Kiriki, Y. Kimuro, Tsutomu Hasegawa","doi":"10.1109/ROMAN.1999.900339","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900339","url":null,"abstract":"This paper describes a 4-legged walking robot that is navigated by human gestures. The navigation process is composed of two phases. The first is a human-following phase, in which the robot tracks the human's head with a visual tracking system. The second is the recognition of human gestures. To achieve reliable, real-time recognition, we focus on gestures expressed by cyclically repeated motions of the hands. We show how these gestures are recognized successfully to control the 4-legged robot.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123055983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"oRis: an agents communications language for distributed virtual environments","authors":"V. Rodin, A. Nédélec","doi":"10.1109/ROMAN.1999.900311","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900311","url":null,"abstract":"In this paper, we present oRis, a multi-agent language. This language is particularly well adapted to the creation of applications in distributed virtual environments. The oRis language is agent-oriented (active objects) and interpreted. The interpreter is written in C++; consequently, oRis allows deep coupling with the C++ language. Moreover, through this oRis/C++ coupling, oRis may be connected to other languages; for example, we are able to call Java for network communication. In its basic version, oRis supports different types of communication between agents located on the same machine: synchronous, asynchronous and broadcast. We have also developed a communication layer that allows communication among remote agents located on different machines.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122005019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hall effect sensor-based linear transducer","authors":"D. Ferrazzin, G. Di Domizio, F. Salsedo, C. Avizzano, F. Tecchia, M. Bergamasco","doi":"10.1109/ROMAN.1999.900343","DOIUrl":"https://doi.org/10.1109/ROMAN.1999.900343","url":null,"abstract":"This paper presents a new type of transducer based on Hall effect sensors. This transducer is currently used at PERCRO as a small, low-cost, low-friction displacement system for tracking hand position. A description of several displacement sensors as well as the Hall effect is reported. The mathematical model was tested by means of an experimental apparatus. The optimal parameters were determined on the apparatus and used in the realisation of a prototype sensor.","PeriodicalId":200240,"journal":{"name":"8th IEEE International Workshop on Robot and Human Interaction. RO-MAN '99 (Cat. No.99TH8483)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126335185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}