Title: Direct perception and action system for unknown object grasping
Authors: H. Masuta, Hun-ok Lim, T. Motoyoshi, K. Koyanagi, T. Oshima
DOI: 10.1109/ROMAN.2015.7333637 (https://doi.org/10.1109/ROMAN.2015.7333637)
Venue: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015-11-23
Abstract: This paper discusses direct perception for grasping, and action for perception, using a depth sensor. Previous methods have to recognize accurate physical parameters in order to grasp an unknown object. We therefore propose a sensation-based perception that extracts only the information relevant to the robot's behavior. To perceive an unknown object, we have proposed a plane-detection-based approach, the SPD-SE. The SPD-SE is applicable to real-time robot perception, with the advantage that the point group and the properties of an unknown object can be extracted at the same time. The sensation of grasping is expressed by an inertia tensor and fuzzy inference; it affords the possibility of action to the robot directly, without inference from physical information such as size, posture, and shape. Experimental results show that the robot can directly detect the information relevant to a grasping behavior, and that the sensation of grasping represents the robot's own state and the environmental state together in a single parameter.

Title: Improvement of object reference recognition through human robot alignment
Authors: Mitsuhiko Kimoto, T. Iio, M. Shiomi, I. Tanev, K. Shimohara, N. Hagita
DOI: 10.1109/ROMAN.2015.7333672 (https://doi.org/10.1109/ROMAN.2015.7333672)
Venue: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015-11-23
Abstract: This paper reports an interactive approach for improving a robot's recognition of objects indicated by humans during human-robot interaction. The approach is based on two findings from conversations in which a human refers to an object and a robot confirms the reference. First, humans tend to use the same words or gestures as the robot, a phenomenon called alignment. Second, humans tend to decrease the amount of information in their references when the robot uses excess information in its confirmations; in other words, alignment inhibition. These findings lead to the following design: to improve recognition accuracy, a robot should use enough information to identify objects without being excessive, because through alignment humans will eventually use similar information to refer to those objects. If humans more frequently use the same information to identify objects, the robot can more easily recognize the objects being indicated. To verify our design, we developed a robotic system that recognizes the objects to which humans refer and conducted a controlled experiment with 2 × 3 conditions; one factor was the robot's confirmation style and the other was the arrangement of the objects. The first factor had two levels of information used to identify objects: enough and excessive. The second factor had three levels: congested, two groups, and sparse. We measured the recognition accuracy for the objects humans referred to and the amount of information in their references. Both the recognition success rate and the information amount were higher in the enough-information condition than in the excess condition in a particular situation. The results suggest that the proposed interactive approach can improve recognition performance.

{"title":"The far side of the uncanny valley: ‘Healthy persons’, androids, and radical uncertainty","authors":"David Silvera Tawil, Michael Garbutt","doi":"10.1109/ROMAN.2015.7333567","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333567","url":null,"abstract":"This paper explores the uncanny valley in a possible future when humans and androids are indistinguishable from each other. In this setting, humans will by definition experience a condition of radical uncertainty. We investigate the strategies people use to attempt to distinguish between human-like entities, both humans and androids. Experimental results demonstrate that visual and contextual information are important during entity identification, but under conditions of radical uncertainty, a feeling of discomfort and rejection towards human-like entities - both humans and androids - may be experienced.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"84 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131495886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Motion generation of the humanoid robot for teleoperation by task model
Authors: Masaya Ogawa, K. Honda, Yoshihiro Sato, S. Kudoh, Takeshi Oishi, K. Ikeuchi
DOI: 10.1109/ROMAN.2015.7333619 (https://doi.org/10.1109/ROMAN.2015.7333619)
Venue: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015-11-23
Abstract: In recent years, research on humanoid robots that take over human tasks in emergency situations has been widely pursued. Many current approaches automate dedicated hardware for each mission, but in environments where the situation changes, a humanoid robot is effective for operating equipment designed for humans. Ultimately, full automation is the ideal; under present circumstances, however, teleoperation of a humanoid robot is an effective way to respond to changing situations. An intuitive interface is required to control a humanoid robot effectively from a distant place. Recently, interfaces that map human motion to the humanoid robot have become popular owing to the development of motion-recognition systems. However, a humanoid robot and a human have different joint structures, physical abilities, and weight balance, so it is not practical to map the motion directly. There is also the issue of time delay between the operator and the robot. It is therefore desirable that the operator make global judgments while the robot runs semi-autonomously in the local environment. In this paper we propose a method for remotely operating a humanoid robot through a task model. Our method describes human behavior abstractly with the task model and maps this abstract expression to the humanoid robot, overcoming the difference in body structure. In this work, we operate the lever of a buggy-type vehicle as an example of mapping using the task model.

Title: Use of robotics to stimulate imitation in children with Autism Spectrum Disorder: A pilot study in a clinical setting
Authors: D. Conti, S. Nuovo, S. Buono, Grazia Trubia, A. D. Nuovo
DOI: 10.1109/ROMAN.2015.7333589 (https://doi.org/10.1109/ROMAN.2015.7333589)
Venue: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015-11-23
Abstract: Autism Spectrum Disorder (ASD) is a condition in which deficits in social interaction and social communication can make everyday life difficult. The use of mechanical and electronic devices has proven effective in ASD therapy, and Socially Assistive Robotics (SAR) research has recently suggested that robots are promising tools for the treatment of this disorder. Starting from these findings, our ongoing research aims to identify effective modalities for treating ASD through interaction with a robot, and to integrate them into existing therapeutic protocols to improve their efficacy. In this paper we present preliminary findings of our current work towards this objective. We detail the methodology and give the results of a pilot clinical trial, focused on imitation skills, with three children affected by ASD and Intellectual Disability (ID) under treatment in a research centre specialized in the care of children with disabilities. The success of the experiment suggests that the robot can be effectively integrated into the ASD therapies currently used in the centre. Analysis of these initial results encourages the development of effective protocols in which the robot becomes a mediator between the child with ASD and humans, and suggests research avenues to focus on in the future.

{"title":"Young children's preconceived notions about robots, and how beliefs may trigger children's thinking and response to robots","authors":"Sandra Y. Okita","doi":"10.1109/ROMAN.2015.7333690","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333690","url":null,"abstract":"This paper examines young children's beliefs and preconceived notions about robots. Robots have several features (e.g., boundary-like features, human-like features, and room for imagination) that may elicit social responses and trigger serious thinking in children. An in-depth interview was conducted with 77 children between the ages 4- to 7-years old to examine how they perceive and understand robots. The findings revealed the type of prior knowledge and beliefs children revert to, and how age influenced how they see and interpret robots. The findings may assist researchers when designing human-robot interaction with young children.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126763593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a synchronised Grammars framework for adaptive musical human-robot collaboration","authors":"Miguel Sarabia, Kyuhwa Lee, Y. Demiris","doi":"10.1109/ROMAN.2015.7333649","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333649","url":null,"abstract":"We present an adaptive musical collaboration framework for interaction between a human and a robot. The aim of our work is to develop a system that receives feedback from the user in real time and learns the music progression style of the user over time. To tackle this problem, we represent a song as a hierarchically structured sequence of music primitives. By exploiting the sequential constraints of these primitives inferred from the structural information combined with user feedback, we show that a robot can play music in accordance with the user's anticipated actions. We use Stochastic Context-Free Grammars augmented with the knowledge of the learnt user's preferences. We provide synthetic experiments as well as a pilot study with a Baxter robot and a tangible music table. The synthetic results show the synchronisation and adaptivity features of our framework and the pilot study suggest these are applicable to create an effective musical collaboration experience.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127915349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental knowledge acquisition for human-robot collaboration","authors":"Batbold Myagmarjav, M. Sridharan","doi":"10.1109/ROMAN.2015.7333666","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333666","url":null,"abstract":"Human-robot collaboration in practical domains typically requires considerable domain knowledge and labeled examples of objects and events of interest. Robots frequently face unforeseen situations in such domains, and it may be difficult to provide labeled samples. Active learning algorithms have been developed to allow robots to ask questions and acquire relevant information when necessary. However, human participants may lack the time and expertise to provide comprehensive feedback. The incremental active learning architecture described in this paper addresses these challenges by posing questions with the objective of maximizing the potential utility of the response from humans who lack domain expertise. Candidate questions are generated using contextual cues, and ranked using a measure of utility that is based on measures of information gain, ambiguity and human confusion. The top-ranked questions are used to update the robot's knowledge by soliciting answers from human participants. The architecture's capabilities are evaluated in a simulated domain, demonstrating a significant reduction in the number of questions posed in comparison with algorithms that use the individual measures or select questions randomly from the set of candidate questions.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121560429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of legible autonomous leading behavior based on dogs' approach","authors":"Soh Takahashi, M. Gácsi, P. Korondi, M. Niitsuma","doi":"10.1109/ROMAN.2015.7333687","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333687","url":null,"abstract":"This study addressed the leading behavior of a robot. To lead a person when the person's attention is initially elsewhere, the robot's behavior should be designed such that it seeks the person's attention and seamlessly brings him or her to a target location. Therefore, we set out to design and implement a leading behavior for a robot inspired by the dogs' behavior sequence. In particular, we considered the legibility of the movements used by the robot to show a robot's destination clearly. We evaluated the autonomous robot's leading behavior through an experiment.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125676717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attractive telepresence communication with movable and touchable display robot","authors":"Masa Ogata, R. Teramura, M. Imai","doi":"10.1109/ROMAN.2015.7333609","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333609","url":null,"abstract":"We propose an active display robot for tele-communication system combining 3-axis display arm manually controlled by touch input, and automatically controlled by human tracking. This system is designed to present distance change behavior between user and robot by implementing two types of user interaction during communication; 1) Providing robot's movement to follow the user who is far from or going to pass by the display robot, 2) providing touchable display to activate and manipulate the direction of the display during the user is sited in front of the display system. Both local and remote users use the same system to communicate, make user's attention to distanced user via the system. Manipulating the movable part of the display by the movement of the finger of the user's touch makes the display correspond to user's intention and attention. There are two advantages of the movable display by the operation of the user's touch. One is to enhance remote communication by making the display arm moving with touch operation. Second is to enable the transmission of non-verbal information through the operation of the display. We designed hardware and software associated with user behavior and user input via the display. It is expected that the system will solve that the problem of misunderstanding of remote device operation, then it contribute to intuitive operation. In order to evaluate how this system does contribute to intuitive operation, we conducted an assessment experiment by using Questionnaire of Likert scale and interview. We verified that the display can move in response to operator's touch input contribute to intuitive experience.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128688576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}