2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005259
G. Capi, H. Toda
"A new robotic system to assist visually impaired people"
Abstract: In this paper, we propose a new robotic system to assist visually impaired people in unknown indoor and outdoor environments. The robotic system, which is equipped with a visual sensor, laser range finders, and a speaker, gives visually impaired people information about the environment around them. The laser data are analyzed using a clustering technique, making it possible to detect obstacles, steps, and stairs. Using the visual sensor, the system is able to distinguish between objects and humans. A PC analyzes the sensor data and sends information to the visually impaired user via natural language or a beep signal. The usefulness of the proposed system is examined experimentally.
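The abstract does not detail the clustering step, but one common approach to segmenting a laser scan into candidate obstacles is to group consecutive returns by Euclidean gap. The sketch below is illustrative only; the function name, threshold, and scan geometry are assumptions, not the authors' implementation:

```python
import math

def cluster_scan(ranges, angle_min, angle_inc, gap=0.3):
    """Group consecutive laser returns into clusters whenever adjacent
    points lie closer than `gap` metres apart (a common heuristic);
    each resulting cluster is a candidate obstacle."""
    # Convert polar returns (range, bearing) to Cartesian points.
    points = [(r * math.cos(angle_min + i * angle_inc),
               r * math.sin(angle_min + i * angle_inc))
              for i, r in enumerate(ranges)]
    clusters, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) < gap:
            current.append(q)      # same object: extend the cluster
        else:
            clusters.append(current)
            current = [q]          # jump in depth: start a new cluster
    clusters.append(current)
    return clusters

# A toy 5-beam scan: a near surface (~1 m) and a far one (~4 m).
scan = [1.0, 1.02, 1.01, 4.0, 4.05]
print(len(cluster_scan(scan, -0.1, 0.05)))  # → 2
```

Step and stair detection would additionally need to examine cluster geometry (e.g. height discontinuities), which this 2-D sketch omits.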
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005269
K. Petersen, K. Fukui, Zhuohua Lin, N. Endo, Kazuki Ebihara, H. Ishii, M. Zecca, A. Takanishi, T. Asfour, R. Dillmann
"Towards high-level, cloud-distributed robotic telepresence: Concept introduction and preliminary experiments"
Abstract: In this paper we propose the basic concept of a telepresence system for two (or more) anthropomorphic robots located in remote locations. As one robot interacts with a user, it acquires knowledge about the user's behavior and transfers this knowledge to the network. The robot in the remote location accesses this knowledge and uses it to emulate the behavior of the remote user when interacting with its partner.
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005255
M. Gielniak, A. Thomaz
"Generating anticipation in robot motion"
Abstract: Robots that display anticipatory motion give their human partners more time to respond in interactive tasks, because the partners become aware of robot intent earlier. We create anticipatory motion autonomously from a single motion exemplar by extracting hand and body symbols that communicate motion intent and moving them earlier in the motion. We validate that our algorithm extracts the most salient frame (i.e. the correct symbol), which is the most informative about motion intent to human observers. Furthermore, we show that anticipatory variants allow humans to discern motion intent sooner than motions without anticipation, and that humans can reliably predict motion intent before the symbol frame when motion is anticipatory. Finally, we quantify the time range during which humans perceive intent most accurately and the collaborative social benefits of anticipatory motion are greatest.
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005281
S. Heikkilä, A. Halme
"Indirect human-robot task communication using affordances"
Abstract: One problem in current human-robot task communication is the laborious need to define action and target object parameters for each task request. This paper's solution is to enable indirect task communication by mimicking the human cognitive ability to understand affordances, i.e. action possibilities in the environment with respect to different actors. This enables humans to communicate tasks using only the task-related action or target object names, and thus avoid the need to remember explicit task request utterances. The proposed task communication is integrated as a subsystem into an existing service robot, and its functionality is evaluated through a set of user experiments in an astronaut-robot task communication context. Affordance-based indirect task communication is shown to successfully reduce the workload experienced by the human and to decrease task communication times, while also being the preferred way to communicate tasks.
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005262
Takanori Ohnuma, Geunho Lee, N. Chong
"Particle filter based feedback control of JAIST Active Robotic Walker"
Abstract: We present a new control scheme for the JAIST Active Robotic Walker (JARoW), developed to provide potential users such as the elderly with sufficient ambulatory capability. Toward practical use, we address JARoW's easy and reliable maneuverability by creating a natural user interface between the user and JARoW. Specifically, our focus is on realizing natural and smooth movement of JARoW despite users' differing gait parameters. For this purpose, a particle filtered interface function (PFIF) is proposed to estimate and predict the locations of the user's legs and body. A simple feedback motion control function then adjusts JARoW's motions according to the estimates and predictions. Experimental results show that the proposed control scheme is satisfactory for practical use without requiring any additional user effort.
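The abstract does not specify the filter's internals, so as a hedged illustration of the general technique, here is a minimal bootstrap particle filter tracking a 1-D position such as a leg's forward displacement. The function name, noise levels, and Gaussian measurement model are assumptions for the sketch, not the PFIF from the paper:

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=0.05, meas_std=0.1):
    """One predict-weight-resample cycle of a bootstrap particle filter
    for a 1-D position (e.g. a leg's forward displacement in metres)."""
    # Predict: diffuse each particle with process noise.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2)
               for p in predicted]
    total = sum(weights) or 1.0          # guard against all-zero weights
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 1.0) for _ in range(500)]
for z in [0.4, 0.45, 0.5]:               # simulated leg measurements
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
print(estimate)  # posterior mean settles near the recent measurements
```

A real walker interface would run this per leg in 2-D from laser data and feed the predictions to the motion controller, as the abstract describes.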
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005254
John Harris, E. Sharlin
"Exploring the affect of abstract motion in social human-robot interaction"
Abstract: We present our exploration of the emotional impact that abstract robot motion has on human-robot interaction (HRI). We argue for the importance of designing for the fundamental characteristics of physical robot motion, as distinct from designing the robot's visual appearance or functional context. We discuss our design approach, the creation of an abstract robotic motion platform that is nearly formless and affordance-less, and our evaluation of the affect that abstract motion had on more than thirty participants who interacted with our robotic platform in a series of studies. We detail our results and explain how different styles of robot motion mapped to emotional responses in human observers. We believe our findings can inform and provide important insight into the purposeful use of motion as a design tool in social human-robot interaction.
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005223
Halit Bener Suay, S. Chernova
"Effect of human guidance and state space size on Interactive Reinforcement Learning"
Abstract: The Interactive Reinforcement Learning algorithm enables a human user to train a robot by providing rewards in response to past actions and anticipatory guidance to guide the selection of future actions. Past work with software agents has shown that incorporating user guidance into the policy learning process through Interactive Reinforcement Learning significantly improves the policy learning time by reducing the number of states the agent explores. We present the first study of Interactive Reinforcement Learning in real-world robotic systems. We report on four experiments that study the effects that teacher guidance and state space size have on policy learning performance. We discuss modifications made to apply Interactive Reinforcement Learning to a real-world system and show that guidance significantly reduces the learning rate, and that its positive effects increase with state space size.
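To illustrate the general idea of guidance in Interactive Reinforcement Learning, without reproducing the paper's system, the toy sketch below compares tabular Q-learning on a chain task with and without a simulated teacher that restricts exploration to the correct action. The task, parameters, and teacher model are all illustrative assumptions; the paper used real human guidance on robots:

```python
import random

def q_learn(n_states=10, episodes=200, guided=False,
            alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D chain: action 0 moves left, action 1
    moves right, and reaching the rightmost state yields reward 1.
    With guided=True, a simulated teacher always suggests the correct
    action, standing in for human anticipatory guidance."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    steps = 0
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if guided:
                a = 1                         # teacher restricts the action set
            elif random.random() < eps:
                a = random.choice((0, 1))     # epsilon-greedy exploration
            else:
                best = max(q[s])              # greedy with random tie-breaking
                a = random.choice([x for x in (0, 1) if q[s][x] == best])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return steps

random.seed(1)
free_steps, guided_steps = q_learn(), q_learn(guided=True)
print(guided_steps < free_steps)  # guidance cuts exploration steps
```

Guided episodes take the minimum 9 steps each, while the unguided learner spends many extra steps exploring: a toy analogue of guidance reducing the states explored.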
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005247
D. Syrdal, K. Dautenhahn, M. Walters, K. Koay, N. Otero
"The Theatre methodology for facilitating discussion in human-robot interaction on information disclosure in a home environment"
Abstract: Our research is concerned with developing scenarios for robot home companions as part of the EU project LIREC. In this work, we employed a particular methodology to gain user feedback in early stages of robot prototyping: the Theatre HRI (THRI) methodology, which we recently introduced in a pilot study. Extending this work, this study used a theatre presentation to convey the user experience of domestic service robots to a group of participants and to gain their feedback in order to further refine our scenarios. The play was designed both from the perspective of the projected technological development of the LIREC project and to facilitate engagement with an audience of secondary school students. At the end of the play, the audience was involved in a discussion of issues such as the acceptability of the scenario and the intra-household disclosure of information by the robot. Findings suggest that this methodology was effective in eliciting discussion with the audience, and that problems related to intra-household disclosure of information were best resolved by clear-cut solutions tied to ownership and clear principles.
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005285
Maha Salem, K. Rohlfing, S. Kopp, F. Joublin
"A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction"
Abstract: Gesture is an important feature of social interaction, frequently used by human speakers to illustrate what speech alone cannot provide, e.g. to convey referential, spatial or iconic information. Accordingly, humanoid robots that are intended to engage in natural human-robot interaction should produce speech-accompanying gestures for comprehensible and believable behavior. But how does a robot's non-verbal behavior influence human evaluation of communication quality and of the robot itself? To address this research question we conducted two experimental studies. Using the Honda humanoid robot, we investigated how humans perceive various gestural patterns performed by the robot as they interact in a situational context. Our findings suggest that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech. These effects were stronger when participants were explicitly asked to direct their attention toward the robot during the interaction.
2011 RO-MAN. Published 2011-08-30. DOI: 10.1109/ROMAN.2011.6005284
S. Chernova, N. DePalma, Elisabeth Morant, C. Breazeal
"Crowdsourcing human-robot interaction: Application from virtual to physical worlds"
Abstract: The ability for robots to engage in interactive behavior with a broad range of people is critical for the future development of social robotic applications. In this paper, we propose the use of online games as a means of generating large-scale data corpora for human-robot interaction research, in order to create robust and diverse interaction models. We describe a data collection approach based on a multiplayer game that was used to collect movement, action and dialog data from hundreds of online users. We then study how these records of human-human interaction, collected in a virtual world, can be used to generate contextually correct social and task-oriented behaviors for a robot collaborating with a human in a similar real-world environment. We evaluate the resulting behavior model using a physical robot in the Boston Museum of Science, and show that the robot successfully performs the collaborative task and that its behavior is strongly influenced by patterns in the crowdsourced dataset.