{"title":"Unsupervised Learning, Recognition, and Generation of Time-series Patterns Based on Self-Organizing Segmentation","authors":"S. Okada, O. Hasegawa","doi":"10.1109/ROMAN.2006.314485","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314485","url":null,"abstract":"This study is intended to realize a motion recognition and generation mechanism based on observation. This mechanism, which is based on imitative learning, enables unsupervised incremental learning, recognition, and generation of time-series patterns that are observed directly from motion images. The mechanism segments these patterns into primitives in a self-organized manner using a mixture-of-experts (MoE) model with a non-monotonous neural network (NMNN). These patterns are expressed as permutations of the primitives output by the MoE, and the permutations are recognized by applying an enhanced dynamic time warping (DTW) method. In addition, we introduce a semi-supervised learning method by applying this mechanism. We confirmed the effectiveness of this mechanism through two experiments using gestures","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115198618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a 21-DOF Humanoid Robot to Attain Flexibility in Human-Like Motion","authors":"H. Yussof, M. Yamano, Y. Nasu, M. Ohka","doi":"10.1109/ROMAN.2006.314418","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314418","url":null,"abstract":"This paper presents the design of a 21-DOF humanoid robot from the perspective of DOFs and joint angle range characteristics, in order to identify the elements that provide the flexibility to attain human-like motion. The correlation of physical structure flexibility between humans and the humanoid robot in performing motion is described to clarify these elements. The investigation focuses on the joint structure design, DOF configuration, and joint rotation ranges of the 21-DOF humanoid robot Bonten-Maru II. Experiments utilizing this robot were conducted, with results indicating the elements that are effective in attaining flexibility in human-like motion","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116884171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Self-Imitation to Direct Learning","authors":"J. Saunders, Chrystopher L. Nehaniv, K. Dautenhahn","doi":"10.1109/ROMAN.2006.314425","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314425","url":null,"abstract":"An evolutionary predecessor to observational imitation may have been self-imitation. Self-imitation occurs when an agent learns and replicates actions it has experienced through the manipulation of its body by another. This form of imitative learning has the advantage of avoiding some of the complexities encountered in observational learning, such as the correspondence problem. We investigate how a system using self-imitation can be constructed with reference to psychological models of motor control, including ideomotor theory, and ideas from social scaffolding seen in animals, allowing us to construct a robotic control system. The system allows a human trainer to teach a robot new skills and modify existing skills. Additionally, the system allows the robot to notify the trainer when it is being taught skills it already possesses. We argue that this mechanism may be the first step towards the transformation from self-imitation to observational imitation. We demonstrate the system on a physical Pioneer robot with a 5-DOF arm and pan/tilt camera, which is taught using self-imitation to track and point to coloured objects","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128474161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconfigurable System Architecture for Net-accessible Pet-type Robot System","authors":"Toshiyuki Maeda","doi":"10.1109/ROMAN.2006.314477","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314477","url":null,"abstract":"This paper addresses a pet-type welfare robot system for aged people. As a partner, the robot offers interactivity: it can converse autonomously and communicate with others using Internet connectivity. To prevent users from tiring of conversation, we propose reconfigurable interactivity, focusing especially on conversation contents. Furthermore, in order to watch over aged people through the Internet, we have developed an auto-detection alert system that checks user logs, which is also reconfigurable","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128736259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Long-term relationships as a benchmark for robot personhood","authors":"K. Macdorman, S. Cowley","doi":"10.1109/ROMAN.2006.314463","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314463","url":null,"abstract":"The human body constructs itself into a person by becoming attuned to the affective consequences of its actions in social relationships. Norms develop that ground perception and action, providing standards for appraising conduct. The body finds itself motivated to enact itself as a character in the drama of life, carving from its beliefs, intentions, and experiences a unique identity and perspective. If a biological body can construct itself into a person by exploiting social mechanisms, could an electromechanical body, a robot, do the same? To qualify for personhood, a robot body must be able to construct its own identity, to assume different roles, and to discriminate in forming friendships. Though all these conditions could be considered benchmarks of personhood, the most compelling benchmark, for which the above mentioned are prerequisites, is the ability to sustain long-term relationships. Long-term relationships demand that a robot continually recreate itself as it scripts its own future. This benchmark may be contrasted with those of previous research, which tend to define personhood in terms that are trivial, subjective, or based on assumptions about moral universals. Although personhood should not in principle be limited to one species, the most humanlike of robots are best equipped for reciprocal relationships with human beings","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127110070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impression Conveyance with Responsive Robot Gaze in a Conversational Situation","authors":"Y. Yoshikawa, K. Shinozawa, H. Ishiguro, N. Hagita, Takanori Miyamoto","doi":"10.1109/ROMAN.2006.314370","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314370","url":null,"abstract":"In face-to-face communication, eyes play a central role, for example, in directing attention and regulating turn-taking. For this reason, the eyes have been a central topic in several fields of interaction study. Although many psychological findings have encouraged previous work in both human-computer and human-robot interaction studies, so far there have been few explorations of how to move an agent's eyes, including when to move them, for communication. Therefore, we investigate this topic in this study. The impression a person forms from an interaction is strongly influenced by the degree to which their partner's gaze direction correlates with their own. In this paper, we evaluate two primitive ways of controlling a robot's gaze responsively to its partner's gaze, comparing them with two additional non-responsive gaze controls in an experiment set in a conversational situation. Consequently, we confirm that robots with different types of responsive gaze control can give subjects different impressions of their characteristics","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130484319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Categorizing Autonomic Nervous System (ANS) Emotional Signals using Bio-Sensors for HRI within the MAUI Paradigm","authors":"Christine L. Lisetti, Fatma Nasoz","doi":"10.1109/ROMAN.2006.314430","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314430","url":null,"abstract":"In this article, we discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user-modeling. We introduce the overall paradigm for our multi-modal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and possible applications of emotion recognition for multimodal intelligent systems","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125281680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Roillo: Creating a Social Robot for Playrooms","authors":"Marek P. Michalowski, S. Šabanović, P. Michel","doi":"10.1109/ROMAN.2006.314453","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314453","url":null,"abstract":"In this paper, we introduce Roillo, a social robotic platform for investigating, in the context of children's playrooms, questions about how to design compelling nonverbal interactive behaviors for social robots. Specifically, we are interested in the importance of rhythm to natural interactions and its role in the expression of affect, attention, and intent. Our design process has consisted of rendering, animation, surveys, mechanical prototyping, and puppeteered interaction with children","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"13 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132360508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"'Doing the right thing wrong' - Personality and tolerance to uncomfortable robot approaches","authors":"D. Syrdal, K. Dautenhahn, Sarah N. Woods, M. Walters, K. Koay","doi":"10.1109/ROMAN.2006.314415","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314415","url":null,"abstract":"The study presented in this paper explored the relationships between subject personality and preferences in the direction from which a robot approached the human participants (N=42) in order to deliver an object in a naturalistic `living room' setting. Personality was assessed using the Big Five Domain Scale. No consistent significant relationships were found between personality traits and preferred approach directions; however, a consistent nonsignificant trend was found in which high scores on the personality trait extraversion were associated with a higher degree of tolerance to the approach directions rated overall as most uncomfortable. The implications of the results are discussed from both theoretical and methodological viewpoints","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128040127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Museum Guide Robot with Communicative Head Motion","authors":"Y. Kuno, Hiroyuki Sekiguchi, Toshio Tsubota, Shota Moriyama, K. Yamazaki, Akiko Yamazaki","doi":"10.1109/ROMAN.2006.314391","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314391","url":null,"abstract":"Face and head movements play an important role in human communication. This paper presents a museum guide robot that moves its head to communicate smoothly with humans. We have analyzed the behavior of human guides as they explain exhibits to visitors. We have then developed a robot system that can recognize a human's face movement using vision. The robot turns its head depending on the human's face direction and the contents of its utterances, using the analysis results of human behavior to control the head movements. Experimental results show that it is effective for the guide robot to turn its head while explaining exhibits","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"34 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132558899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}