{"title":"Basic controls for generating robots behaviors in HRI contexts","authors":"A. Meddahi, R. Chellali","doi":"10.1109/ROMAN.2014.6926281","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926281","url":null,"abstract":"In this contribution, we describe an unified controller we implemented in order to simplify creating the common interactive behaviors that the robot may performs with human users. This controller is based on a multi-objective optimization technique allowing simultaneous achievement of different constraints and goals. The core of the developed algorithm is based on the particle swarm optimization technique (PSO). Through some examples, namely, a wheeled manipulator following human in populated and cluttered environments, and a humanoid robot reaching and tracking objects. we show how the PSO is effective in solving the given problems in flexible and generic ways. We detail in this paper the algorithm and give some results obtained from simulations and field trials.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"83 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116359892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of blame on trust in human robot interaction","authors":"Poornima Kaniarasu, Aaron Steinfeld","doi":"10.1109/ROMAN.2014.6926359","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926359","url":null,"abstract":"Trust in automation is a crucial ingredient for successful human robot interaction. Both human related and robot related factors influence the user's trust on the robot and it is challenging to characterize each of these factors and study how they affect human trust. In this study we try to understand how blame attribution after an error impacts user trust. Three different robot personalities were implemented, each assigning blame to either of the user, the robot itself, or the human-robot team. Our study results confirm that blame attribution impacts human trust in robots.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126549181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realistic 3D simulation of multiple human recognition over Perception Sensor Network","authors":"JiGwan Park, Kijin An, Jong-suk Choi","doi":"10.1109/ROMAN.2014.6926303","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926303","url":null,"abstract":"In this paper, we introduce a simulation approach for emulating a real-world human recognition system called a Perception Sensor Network (PSN). The proposed PSN system has fusion components for automatic localization and facial recognition that uses multiple Kinect sensors and pan-tilt-zoom (PTZ) cameras. We verified that the generic vision schemes utilized in human detection and facial recognition algorithms were interoperable in a scenario that consisted of virtual avatar humans. In experiments, we tested a perception scenario in which multiple humans in a 3D simulation space are automatically recognized in both location and identification with real-world system parameters.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127927010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transferring human navigation behaviors into a robot local planner","authors":"Rafael Ramón Vigo, Noé Pérez-Higueras, F. Caballero, L. Merino","doi":"10.1109/ROMAN.2014.6926347","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926347","url":null,"abstract":"Robot navigation in human environments is an active research area that poses serious challenges. Among them, social navigation and human-awareness has gain lot of attention in the last years due to its important role in human safety and robot acceptance. Learning has been proposed as a more principled way of estimating the insights of human social interactions. In this paper, inverse reinforcement learning is analyzed as a tool to transfer the typical human navigation behavior to the robot local navigation planner. Observations of real human motion interactions found in one publicly available datasets are employed to learn a cost function, which is then used to determine a navigation controller. The paper presents an analysis of the performance of the controller behavior in two different scenarios interacting with persons, and a comparison of this approach with a Proxemics-based method.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"817 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127526439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing My Keepon robot: A simple and low-cost solution for robot platform in Human-Robot Interaction studies","authors":"Hoang-Long Cao, G. Perre, Ramona Simut, C. Pop, Andreea Peca, D. Lefeber, B. Vanderborght","doi":"10.1109/ROMAN.2014.6926311","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926311","url":null,"abstract":"Many robots capable of performing social behaviors have recently been developed for Human-Robot Interaction (HRI) studies. These social robots are applied in various domains such as education, entertainment, medicine, and collaboration. Besides the undisputed advantages, a major difficulty in HRI studies with social robots is that the robot platforms are typically expensive and/or not open-source. It burdens researchers to broaden experiments to a larger scale or apply study results in practice. This paper describes a method to modify My Keepon, a toy version of Keepon robot, to be a programmable platform for HRI studies, especially for robot-assisted therapies. With an Arduino microcontroller board and an open-source Microsoft Visual C# software, users are able to fully control the sounds and motions of My Keepon, and configure the robot to the needs of their research. Peripherals can be added for advanced studies (e.g., mouse, keyboard, buttons, PlayStation2 console, Emotiv neuroheadset, Kinect). Our psychological experiment results show that My Keepon modification is a useful and low-cost platform for several HRI studies.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127600855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The autonomy levels and the human intervention levels of robots: The impact of robot types in human-robot interaction","authors":"Jung-Ju Choi, Yunkyung Kim, Sonya S. Kwak","doi":"10.1109/ROMAN.2014.6926394","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926394","url":null,"abstract":"The objective of this study is to examine the effect of the robot types on emotional engagement with robots. Robots are classified into an autonomous robot and a tele-operated robot according to the levels of autonomy. On the contrary, robots could be distinguished depending on the levels of human intervention required for controlling a robot. An autonomous robot performs task by itself while a tele-operated robot requires an operator's help in task-oriented activity. In emotional communication, an autonomous robot expresses robotic emotions by itself whereas a tele-operated robot delivers an operator's emotions to a receiver. In this study, we compared the impact of the two robot types on perceived intelligence and social presence of robots. We executed a 2 (robot types: an autonomous robot vs. a tele-operated robot) within-participants experiment (N=36). Participants had an interview with either autonomous robot interviewers or tele-operated robot interviewers. They evaluated autonomous robots as more intelligent than tele-operated robots while they felt more social presence toward tele-operated robots than autonomous robots. Implications for design of social robots to increase humans' emotional engagement with robots are discussed.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"356 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133610494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A drop-out rate in a long-term cognitive rehabilitation program through robotics aimed at children with TBI","authors":"Àlex Barco, J. Albó-Canals, Carles Garriga, Xavier Vilasís-Cardona, Laura Callejón, M. Turón, Claudia Gómez, A. López-Sala","doi":"10.1109/ROMAN.2014.6926251","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926251","url":null,"abstract":"This paper describes in detail the robot platform used in a rehabilitation program for children with Traumatic Brain Injury (TBI) under a project to compare a rehabilitation program through robotics (1) with a conventional rehabilitation program directed to parents (2) and a control group where no specific intervention is done (3). As LEGO ® has been demonstrated as a useful robotic tool able to enhance children motivation, we have used it attached to an iPod which includes several activities defined by neuropsychologists and customized for each patient. In this paper we present the hardware and the software of this robotic platform and also the description of the activities that have already been proposed to patients. We present results about the use of the robot showing that the drop-out rate is lower in the intervention group with robots than in the program directed to parents.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130389148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Actions speak louder than looks: Does robot appearance affect human reactions to robot protest and distress?","authors":"Gordon Briggs, Bryce Gessell, Matt Dunlap, Matthias Scheutz","doi":"10.1109/ROMAN.2014.6926402","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926402","url":null,"abstract":"People will eventually be exposed to robotic agents that may protest their commands for a wide range of reasons. We present an experiment designed to determine whether a robot's appearance has a significant effect on the amount of agency people ascribed to it and its ability to dissuade a human operator from forcing it to carry out a specific command. Participants engage in a human-robot interaction (HRI) with either a small humanoid or non-humanoid robot that verbally protests a command. Initial results indicate that humanoid appearance does not significantly affect the behavior of human operators in the task. Agency ratings given to the robots were also not significantly affected.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134258053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coordinating turn-taking and talking in multi-party conversations by controlling robot's eye-gaze","authors":"Ryo Sato, Yugo Takeuchi","doi":"10.1109/ROMAN.2014.6926266","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926266","url":null,"abstract":"In this study, we suggest a method to coordinate turn-taking and talking in multi-party conversations by the gaze of a robot that participates on the side. Also, we use the experimental paradigm named “Cooperative Turn-taking Game in Non-verbal Situation”, which is a simplified multi-party conversation environment. We investigated whether designing eye-gazes for such robots can coordinate turn-taking and talking in multi-party conversations, and we found the robot's gaze could coordinate turn-taking and talking in multi-party conversations. Our study is expected to effectively encourage desirable talking in such multi-party conversations as collaborative learning scenes.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132184511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A probabilistic approach for human everyday activities recognition using body motion from RGB-D images","authors":"D. Faria, C. Premebida, U. Nunes","doi":"10.1109/ROMAN.2014.6926340","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926340","url":null,"abstract":"In this work, we propose an approach that relies on cues from depth perception from RGB-D images, where features related to human body motion (3D skeleton features) are used on multiple learning classifiers in order to recognize human activities on a benchmark dataset. A Dynamic Bayesian Mixture Model (DBMM) is designed to combine multiple classifier likelihoods into a single form, assigning weights (by an uncertainty measure) to counterbalance the likelihoods as a posterior probability. Temporal information is incorporated in the DBMM by means of prior probabilities, taking into consideration previous probabilistic inference to reinforce current-frame classification. The publicly available Cornell Activity Dataset [1] with 12 different human activities was used to evaluate the proposed approach. Reported results on testing dataset show that our approach overcomes state of the art methods in terms of precision, recall and overall accuracy. The developed work allows the use of activities classification for applications where the human behaviour recognition is important, such as human-robot interaction, assisted living for elderly care, among others.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128017803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}