{"title":"Sensing Precision of an Optical Three-axis Tactile Sensor for a Robotic Finger","authors":"M. Ohka, Hiroaki Kobayashi, Jumpei Takata, Y. Mitsuya","doi":"10.1109/ROMAN.2006.314420","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314420","url":null,"abstract":"We are developing an optical three-axis tactile sensor capable of acquiring normal and shearing force, with the aim of mounting it on a robotic finger. The tactile sensor is based on the principle of an optical waveguide-type tactile sensor, which is composed of an acrylic hemispherical dome, a light source, an array of rubber sensing elements, and a CCD camera. The sensing element of silicone rubber comprises one columnar feeler and eight conical feelers. The contact areas of the conical feelers, which maintain contact with the acrylic dome, detect the three-axis force applied to the tip of the sensing element. Normal and shearing forces are then calculated from integration and centroid displacement of the gray-scale value derived from the conical feeler's contacts. To evaluate the present tactile sensor, we have conducted a series of experiments using a y-z stage, a rotational stage, and a force gauge, and have found that although the relationship between the integrated gray-scale value and normal force depends on the sensor's latitude on the hemispherical surface, it is easy to modify the sensitivity according to the latitude, and that the centroid displacement of the gray-scale value is proportional to the shearing force. When we examined repeatability of the present tactile sensor with 1,000 load-unload cycles, the respective error of the normal and shearing forces was 2 and 5%","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134325335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visuo-Haptic Interface for Teleoperation of Mobile Robot Exploration Tasks","authors":"N. Mitsou, Spyros Velanas, C. Tzafestas","doi":"10.1109/ROMAN.2006.314411","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314411","url":null,"abstract":"With the spread of low-cost haptic devices, haptic interfaces appear in many areas in the field of robotics. Recently, haptic devices have been used in the field of mobile robot teleoperation, where mobile robots operate in unknown and dangerous environments performing particular tasks. Haptic feedback is shown to improve operator perception of the environment without, however, improving exploration time. In this paper, we present a haptic interface that is used to teleoperate a mobile robot in exploring polygonal environments. The proposed visuo-haptic interface is found to improve navigation time and operator perception of the remote environment. The human-operator can simultaneously select two different commands, the first one being set as \"active\" motion command, while the second one is set as a \"guarded\" motion type of navigation command. The user can feel a haptic equivalent for both types of teleguidance motion commands, and can also observe in real-time the sequential creation of the remote environment map. Comparative evaluation experiments show that the proposed system makes the task of remote navigation of unknown environments easier","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"304 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114598849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Gesture Recognition for Human-Robot Symbiosis","authors":"M. Bhuiyan","doi":"10.1109/ROMAN.2006.314445","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314445","url":null,"abstract":"This paper presents a vision based gesture recognition system for human-robot symbiosis. The system is based on visual information of the face gestures recognition by connected component analysis of the skin color segmentation of images in HSV color model and PCA based pattern-matching strategies. On gesture recognition, the robot is being instructed to perform certain tasks by issuing commands. The system has been demonstrated with the implementation of the algorithm to interact with a robot named, AIBO-MIN for human-robot symbiotic relationship","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123365357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relationship between Utterance Dynamics and Pragmatics in the Conversation of Consensus Building Process","authors":"Makoto Yoshida, Y. Miyake","doi":"10.1109/ROMAN.2006.314472","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314472","url":null,"abstract":"We measured the turn taking process in a dialogue for building consensus between two subjects. Additionally, the temporal development of cycle and response time of utterance was analyzed and we investigated their conversational dynamics. As a result, temporal development of turn taking in the consensus building process and its typical development were clarified","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122690124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Affect in Socially Interactive Robots","authors":"Rachel Gockley, R. Simmons, J. Forlizzi","doi":"10.1109/ROMAN.2006.314448","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314448","url":null,"abstract":"Humans use expressions of emotion in a very social manner, to convey messages such as \"I'm happy to see you\" or \"I want to be comforted,\" and people's long-term relationships depend heavily on shared emotional experiences. We believe that for robots to interact naturally with humans in social situations they should also be able to express emotions in both short-term and long-term relationships. To this end, we have developed an affective model for social robots. This generative model attempts to create natural, human-like affect and includes distinctions between immediate emotional responses, the overall mood of the robot, and long-term attitudes toward each visitor to the robot. This paper presents the general affect model as well as particular details of our implementation of the model on one robot, the Roboceptionist","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125288457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Social Skills for Robots Interacting with Virtual Characters in Real Worlds","authors":"C. Gatzoulis, Anastis Sourgoutsidis, V. Hurmusiadis, Wen Tang","doi":"10.1109/ROMAN.2006.314387","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314387","url":null,"abstract":"We propose the implementation of a new interaction type that allows the creation of adaptive social relationships between robots and virtual characters in a real world environment, using reinforcement learning. We present the implementation of a storytelling scenario, which results in an immersion experience for the robot. The robot is able to interact and learn dynamically from the virtual character","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128258617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social Attitude Towards A Conversational Character","authors":"G. Clarizio, Irene Mazzotta, Nicole Novielli, F. D. Rosis","doi":"10.1109/ROMAN.2006.314386","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314386","url":null,"abstract":"This paper describes our experience with the design, implementation and validation of a user model for adapting health promotion dialogs with ECAs to the attitude of users toward the agent. The model was conceived in agreement with the theory of social emotions in communication. It integrates a linguistic parser with a dynamic Bayesian network and was learnt from a corpus of data collected with a Wizard of Oz study","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128654704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Prosocial Response to Emotive Facial Expression of Interactive Agent","authors":"Yugo Takeuchi, Takuro Hada","doi":"10.1109/ROMAN.2006.314479","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314479","url":null,"abstract":"In the scene of a face to face conversation, people exchange not only verbal messages but also non-verbal information such as facial expressions, which express such mental conditions as their mood, emotion, or attitude. The purpose of this study is to examine the degree of sympathy affected by human toward an interactive agent displaying CG generated facial expressions through a psychological experiment. Our experiments have shown us that prosocial behavior was observed when, in response to a negative mood induced by horrible or frightful pictures unconditionally shown, subjects gave more assists to the agent whose facial expression expresses the mood congruous to that of the subjects","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124536718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AR-KLT based Hand Tracking","authors":"Hye-Jin Kim, Keun-Chang Kwak, Soo-Young Chi, Young-Jo Cho","doi":"10.1109/ROMAN.2006.314467","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314467","url":null,"abstract":"This paper proposes a novel real-time robust hand tracking algorithm, integrating multi-cues, and a limb's degree of freedom. For this purpose, we construct a limb model and maintain the model obtained from KLT-AR methods with respect to second-order auto-regression model and Kanade-Lucas-Tomasi (KLT) features, respectively. Furthermore, this method provides directivity of a target, enabling us to predict the next motion. Thus, we can develop a method of hand tracking for gesture and behavior recognition techniques frequently used in conjunction with human-robot interaction (HRI) components. The experimental results show that the proposed method yields a good performance in the intelligent service robots, so called Wever developed in ETRI","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130878828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Naturally Occurring Gestures in a Human-Robot Teaching Scenario","authors":"N. Otero, Chrystopher L. Nehaniv, D. Syrdal, K. Dautenhahn","doi":"10.1109/ROMAN.2006.314444","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314444","url":null,"abstract":"This paper describes our general framework for the investigation of how human gestures can be used to facilitate the interaction and communication between humans and robots. More specifically, a study was carried out to reveal which \"naturally occurring\" gestures can be observed in a scenario where users had to explain to a robot how to perform a specific home task. The study followed a within-subjects design where ten participants had to demonstrate how to lay a table for two people using two different methods for their explanation: utilizing only gestures or gestures and speech. The experiments also served to validate a new coding scheme for human gestures in human-robot interaction, with good inter-rater reliability. Moreover, annotated video corpus was produced and characteristics such as frequency, duration, and co-occurrence of the different gestural classes have been gathered in order to capture requirements for the designers of HRI systems. The results regarding the frequencies of the different gestural types suggest an interaction between the order of presentation of the two methods and the actual type of gestures produced. Moreover, the results also suggest that there might be an interaction between the type of task and the type of gestures produced","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131594099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}