{"title":"Task Learning Using Graphical Programming and Human Demonstrations","authors":"S. Ekvall, D. Aarno, D. Kragic","doi":"10.1109/ROMAN.2006.314466","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314466","url":null,"abstract":"The next generation of robots will have to learn new tasks or refine existing ones through direct interaction with the environment or through a teaching/coaching process in programming by demonstration (PbD) and learning by instruction frameworks. In this paper, we propose to extend the classical PbD approach with a graphical language that makes robot coaching easier. The main idea is based on graphical programming, where the user designs complex robot tasks using a set of low-level action primitives. Unlike other systems, our action primitives are general and flexible, so the user can train them online and thereby easily design high-level tasks.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133408476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic control of a monitoring camera for remote manipulator operation","authors":"M. Niwa, N. Tsuda, N. Kato, Y. Nomura, H. Matsui","doi":"10.1109/ROMAN.2006.314410","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314410","url":null,"abstract":"When a robot is operated remotely, a monitoring camera is often used. In conventional systems, an operator controls both the robot and the monitoring camera. The camera must be controlled so that its zoom scale and angle are suitable for the operator; automatic camera control is therefore necessary for smooth robot operation. In this paper, the authors propose a new algorithm to control a monitoring camera automatically. While an operator sends commands to the manipulator via a joystick, the monitoring camera is controlled by an algorithm that exploits the joystick motion information. Experiments were carried out, and the effectiveness of the proposed algorithm was confirmed.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130203850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Content-based control of goal-directed attention during human action perception","authors":"Y. Demiris, B. Khadhouri","doi":"10.1109/ROMAN.2006.314422","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314422","url":null,"abstract":"During the perception of human actions, a robotic assistant needs to direct its computational and sensor resources to the relevant parts of the human action. In previous work (Demiris and Khadhouri, 2006), we introduced HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition), a computational architecture that forms multiple hypotheses about what the demonstrated task is, and multiple predictions about the forthcoming states of the human action. To confirm their predictions, the hypotheses request information from an attentional mechanism, which allocates the robot's resources as a function of the saliency of the hypotheses. In this paper, we augment the attention mechanism with a component that considers the content of the hypotheses' requests with respect to reliability, utility and cost. This content-based attention component further optimises the utilisation of the resources while remaining robust to noise. Such computational mechanisms are important for the development of robotic devices that respond rapidly to human actions, whether for imitation or for collaboration.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"6 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131381963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teaching a Humanoid Robot to Recognize and Reproduce Social Cues","authors":"S. Calinon, A. Billard","doi":"10.1109/ROMAN.2006.314458","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314458","url":null,"abstract":"In a robot programming by demonstration framework, several demonstrations of a task are required to generalize and reproduce the task under different circumstances. To teach a task to the robot, explicit pointers are required to signal the start/end of a demonstration and to switch between the learning/reproduction phases. Coordination of the learning system can be achieved by adding social cues to the interaction process. Here, we propose to use an imitation game to teach a humanoid robot to recognize communicative gestures, which then serve as social signals in a pointing-at-objects scenario. The system is based on hidden Markov models (HMMs) and uses motion sensors to track the user's gestures.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134025564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavior Analysis of Children's Touch on a Small Humanoid Robot: Long-term Observation at a Daily Classroom over Three Months","authors":"F. Tanaka, J. Movellan","doi":"10.1109/ROMAN.2006.314491","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314491","url":null,"abstract":"We have been conducting HRI (human-robot interaction) studies under the basic principle of design by immersion, which emphasizes the importance of researchers moving themselves into unconstrained daily-life environments. This is crucial for the design and development of social robots that interact with and assist people in the everyday real world. In this paper, we report findings from a study in which a small humanoid robot attended a nursery school daily for more than three months. We focus on children's touch behavior toward the robot and conduct video analyses based on six categories of touch behavior. The results reveal important conditions for designing everyday robots.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129991129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"KANSEI Robotics to Open a New Epoch of Human-Machine Relationship - Machine with a Heart -","authors":"S. Hashimoto","doi":"10.1109/ROMAN.2006.314385","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314385","url":null,"abstract":"Information processing technology can be classified into three historical phases. The first phase is physical information processing, which treats physical data from the real world. This technology is often called \"signal processing\". Because the targets are mainly physical signals such as sound, brightness and force, the processing is grounded in the laws of nature; the reality of the system is examined by the physical explainability of its input-output relations, and causality is important. The second phase is logical information processing, which deals with knowledge and rules. \"Artificial intelligence\" belongs to this phase, in which symbols and languages are processed according to rules; here, consistency and provability are most important. We are now entering the third phase, \"KANSEI\" information processing, which treats human emotion. \"KANSEI\" is a Japanese word expressing a subjective concept that includes \"sensibility\", \"feelings\", \"intuitiveness\" and \"mood\". The term \"human factor\" is closely related to KANSEI. In KANSEI information processing, however, we intend to approach the human emotional world more positively, applying computer technology to affective information processing and human communication, including art, musical performance and entertainment.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114138400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distribution and Recognition of Gestures in Human-Robot Interaction","authors":"N. Otero, S. Knoop, Chrystopher L. Nehaniv, D. Syrdal, K. Dautenhahn, R. Dillmann","doi":"10.1109/ROMAN.2006.314402","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314402","url":null,"abstract":"This paper presents an approach for human activity recognition focusing on gestures in a teaching scenario, together with the setup and results of user studies on human gestures exhibited in unconstrained human-robot interaction (HRI). The user studies analyze several aspects: the distribution of gestures, the relations and characteristics of these gestures, and the acceptability of different gesture types in a human-robot teaching scenario. The results are then evaluated with regard to the activity recognition approach. The main effort is to bridge the gap between human activity recognition methods on the one hand and naturally occurring, or at least acceptable, gestures for HRI on the other. The goal is twofold: to provide recognition methods with information and requirements on the characteristics and features of human activities in HRI, and to identify human preferences and requirements for the recognition of gestures in human-robot teaching scenarios.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127025609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Task Structure and User Attributes as Elements of Human-Robot Interaction Design","authors":"Bilge Mutlu, Steven Osman, J. Forlizzi, J. Hodgins, S. Kiesler","doi":"10.1109/ROMAN.2006.314397","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314397","url":null,"abstract":"Recent developments in humanoid robotics have made technologically advanced robots possible and created a vision for their everyday use as assistants in the home and workplace. Nonetheless, little is known about how we should design interactions with humanoid robots. In this paper, we argue that adaptation to user attributes (in particular, gender) and task structure (in particular, a competitive vs. a cooperative structure) are key design elements. We experimentally demonstrate how these two elements affect users' social perceptions of ASIMO after playing an interactive video game with him.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127042959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The I-SWARM project","authors":"H. Wörn, Marc Szymanski, J. Seyfried","doi":"10.1109/ROMAN.2006.314376","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314376","url":null,"abstract":"This paper describes the initial work of the EC-funded project I-SWARM, which aims to take a leap forward in robotics research, in distributed and adaptive systems, and in the understanding of self-organizing biological swarm systems. The next step beyond today's micro-robotics research will be the mass production of micro robots, which can then be deployed as a \"real\" swarm consisting of up to 1,000 robot clients. These clients will all be equipped with limited, pre-rational on-board intelligence. The swarm will consist of a huge number of heterogeneous robots, differing in their sensors, manipulators, computational power, behaviours and programs. Such a robot swarm can be employed for a variety of applications, including micro assembly, nano-handling, and biological, medical or cleaning tasks.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115941995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodological Issues of Annotating Vision Sensor Data using Subjects' Own Judgement of Comfort in a Robot Human Following Experiment","authors":"K. Koay, Z. Zivkovic, B. Kröse, K. Dautenhahn, M. Walters, N. Otero, A. Alissandrakis","doi":"10.1109/ROMAN.2006.314396","DOIUrl":"https://doi.org/10.1109/ROMAN.2006.314396","url":null,"abstract":"When determining subject preferences for human-robot interaction, an important issue is the interpretation of the subjects' responses during the trials. Employing a non-intrusive approach, this paper discusses the methodological issues of annotating vision data by allowing subjects to indicate their comfort with a handheld comfort-level device during the trials. In previous research, analysis of the collected comfort and vision data was hampered by the difficulty of manually synchronizing the different modalities. In the current paper, we overcome this issue by integrating the subjects' comfort feedback into the video stream in real time. The implications for more efficient analysis of human-robot interaction data, as well as possible future developments of this approach, are discussed.","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115730857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}