{"title":"A robust multimodal fusion framework for command interpretation in human-robot cooperation","authors":"Jonathan Cacace, Alberto Finzi, V. Lippiello","doi":"10.1109/ROMAN.2017.8172329","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172329","url":null,"abstract":"We present a novel multimodal interaction framework supporting robust human-robot communication. We consider a scenario where a human operator can exploit multiple communication channels to interact with one or more robots in order to accomplish shared tasks. Moreover, we assume that the human is not fully dedicated to robot control, but is also involved in other activities, and is hence only able to interact with the robotic system in a sparse and incomplete manner. In this context, several human or environmental factors could cause errors, noise and wrong interpretations of the commands. The main goal of this work is to improve the robustness of human-robot interaction systems in such situations. In particular, we propose a multimodal fusion method based on the following steps: for each communication channel, unimodal classifiers are first deployed to generate unimodal interpretations of the human inputs; the unimodal outcomes are then grouped into different multimodal recognition lines, each representing a possible interpretation of a sequence of multimodal inputs; these lines are finally assessed in order to recognize the human commands. We discuss the system at work in a case study in which a human rescuer interacts with a team of flying robots during Search & Rescue missions. In this scenario, we present and discuss real-world experiments to demonstrate the effectiveness of the proposed framework.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114588251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Entropy-based eye-tracking analysis when a user watches a PRVA's recommendations","authors":"T. Matsui, S. Yamada","doi":"10.1109/ROMAN.2017.8172275","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172275","url":null,"abstract":"We conducted three experiments to discover the effect of a virtual agent's state transitions on a user's eye gaze. Many previous studies have shown that an agent's state transitions affect a user's state. We focused on two kinds of transitions: internal state transitions and appearance state transitions. In this research, we used a product recommendation virtual agent (PRVA) and aimed to discover the effect of its state transitions on users' eye gaze as it made recommendations. We used entropy-based analysis to visualise the deviation of a user's fixations. In experiment 1, the PRVA made recommendations without state transitions. In experiment 2, the amount of the PRVA's knowledge transitioned from low to high during the recommendations; this is an internal state transition. In experiment 3, the PRVA's facial expressions and gestures transitioned from a neutral to a positive emotion during the recommendations; this is an appearance state transition. Both the entropy-based analysis and the fixation-duration-based analysis showed significant differences in experiment 3. These results show that an agent's appearance state transitions cause a user's eye gaze to transition.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"94 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125977506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gesture recognition for humanoid robot teleoperation","authors":"Insaf Ajili, M. Mallem, Jean-Yves Didier","doi":"10.1109/ROMAN.2017.8172443","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172443","url":null,"abstract":"Interactive robotics is a vast and expanding research field. Interactions must be sufficiently natural, with robots exhibiting behavior that humans find socially acceptable and that adapts to user expectations, thus allowing easy integration into our daily lives in various fields (science, industry, health, etc.). Natural interaction during human-robot collaborative action requires suitable interaction techniques. In this paper, we develop a gesture recognition system for natural and intuitive communication between a human and the NAO robot. However, recognizing meaningful patterns from whole-body gestures is a complex task; we therefore use the Laban Movement Analysis technique to describe high-level gestures for NAO teleoperation. The major contributions of the present work are: (1) an efficient preprocessing step based on view-invariant human motion, (2) a robust descriptor vector based on the Laban Movement Analysis technique that generates compact and informative representations of human movement, and (3) a gesture recognition system based on Hidden Markov Models, applied to teleoperate NAO using our own database dedicated to its teleoperation. Our approach was evaluated on two challenging datasets, Microsoft Research Cambridge-12 (MSRC-12) and UTKinect-Action. Experimental results show that our approach outperforms state-of-the-art methods.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121912262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating the users in the design of a robot for making Comprehensive Geriatric Assessments (CGA) to elderly people in care centers","authors":"K. Ting, D. Voilmy, Ana Iglesias, J. C. Pulido, Javier García, A. Romero-Garcés, J. Rubio, R. Marfil, Alvaro Dueñas-Ruiz","doi":"10.1109/ROMAN.2017.8172346","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172346","url":null,"abstract":"Comprehensive Geriatric Assessment (CGA) is a multidimensional and multidisciplinary diagnostic instrument that helps provide personalized care to the elderly by evaluating their physical and mental state. In a social and economic context of growing ageing populations, medical experts can save time and effort if provided with interactive tools that efficiently assist them in performing CGAs, managing standardized tests or collecting data. Recent research proposes the use of social robots as the central part of these tools. These robots must be able to deliver all the functionalities that questionnaires or motion-based tests require, including natural language, face tracking and monitoring, human motion capture and so on. Another issue is the robot's acceptability and the trust placed in it by the end-users, both patients (elderly people) and clinicians: the robot needs to be able to engage with the patients during the interaction sessions, and must be perceived as a useful and efficient tool by the clinicians. This paper presents the acquisition of new user requirements for CLARC, through a participatory, user-centered design approach, to inform the improvement of both interface and interaction. Thirty-eight persons (elderly people, caregivers and health professionals) were involved in the design process of CLARC, based on user-centered methods and techniques from the Human-Computer Interaction discipline.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122639216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gait measurement by a mobile humanoid robot as a walking trainer","authors":"Chiara Piezzo, Bruno Leme, Masakazu Hirokawa, Kenji Suzuki","doi":"10.1109/ROMAN.2017.8172438","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172438","url":null,"abstract":"It is well known that walking offers many health benefits for everyone, especially for older people who need to maintain mobility and independence while coping with declining functional capacity. In this paper, we present the design of a humanoid walking trainer intended to monitor and encourage walking in the elderly. This design is based on our target users' preferences. We also present a preliminary walking experiment carried out to test the accuracy of the gait data obtained during motion from the laser range sensor mounted on the robot.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"389 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122128905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning users' and personality-gender preferences in close human-robot interaction","authors":"Arturo Cruz-Maya, A. Tapus","doi":"10.1109/ROMAN.2017.8172393","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172393","url":null,"abstract":"Robots are expected to interact with people in their everyday activities and should learn the preferences of their users in order to deliver a more natural interaction. A memory system that remembers past events and uses them to adapt the robot's behavior is a useful feature for robots to have. Nevertheless, robots will have to face unknown situations and behave appropriately. We propose using the user's personality (introversion/extroversion) to create a model that predicts user preferences when there are no past interactions for a given robot task. For this, we propose a framework that combines an Emotion System based on the OCC Model with an Episodic-Like Memory System. We conducted an experiment in which a group of participants customized the robot's behavior according to their preferences (personal distance, gesture amplitude, gesture speed). We tested the obtained model against preset behaviors based on the literature about extroversion preferences in interaction; for this, a different group of participants was recruited. Results show that our proposed model generated a behavior that participants preferred over the preset behaviors. Only the group of introverted female participants did not show any significant difference between the different behaviors.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131364508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User experience of conveying emotions by touch","authors":"Beatrice Alenljung, Rebecca Andreasson, E. Billing, J. Lindblom, Robert J. Lowe","doi":"10.1109/ROMAN.2017.8172463","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172463","url":null,"abstract":"In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users' replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and the design of social and affective robotics in particular.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126440764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a robot that is capable of high fiving with humans","authors":"Erina Okamura, F. Tanaka","doi":"10.1109/ROMAN.2017.8172380","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172380","url":null,"abstract":"High fiving enhances communication in human society. Therefore, a robot that is capable of high fiving could build a better relationship with humans. To design such a robot, it is necessary to determine the requirements of robotic high fives. The goal of this paper is to present such requirements that were identified from the analysis of human high fives, and to show the actual implementations on a humanoid robot. The process of high fiving is composed of two phases: people determine a high five motion according to the current occasion, and then they adjust the motion according to the situation surrounding them. In this paper, we particularly report these motion adjustment functions, which were tested with human participants. Feedback and other requirements for an effective robotic high five are reported.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125681773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do moral robots always fail? Investigating human attitudes towards ethical decisions of automated systems","authors":"Philipp Wintersberger, Anna-Katharina Frison, A. Riener, Shailie Thakkar","doi":"10.1109/ROMAN.2017.8172493","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172493","url":null,"abstract":"Technological advances will soon make it possible for automated systems (such as vehicles or search and rescue drones) to take over tasks that have been performed by humans. Still, it will be humans who interact with these systems, and relying on a system's decisions will require trust in the robot/machine and its algorithms. Trust research has a long history, but one dimension of trust, ethical or morally acceptable decisions, has not received much attention so far. Humans are continuously faced with ethical decisions, reached based on a personal value system and intuition. In order for people to be able to trust a system, it must have widely accepted ethical capabilities. Although some studies indicate that people prefer utilitarian decisions in critical situations, e.g. when a decision requires favoring one person over another, this approach would violate laws and international human rights, as individuals must not be ranked or classified by personal characteristics. One solution to this dilemma would be to make decisions by chance, but would system users accept this? To find out whether randomized decisions are accepted by humans in morally ambiguous situations, we conducted an online survey in which subjects had to rate their personal attitudes toward decisions of moral algorithms in different scenarios. Our results (n=330) show that, although slightly more respondents state a preference for decisions based on ethical rules, randomization is perceived as the most just and morally right option and may thus drive decisions when other objective parameters are equal.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131856645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robots educate in style: The effect of context and non-verbal behaviour on children's perceptions of warmth and competence","authors":"Rifca Peters, J. Broekens, Mark Antonius Neerincx","doi":"10.1109/ROMAN.2017.8172341","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172341","url":null,"abstract":"Social robots are entering the private and public domain, where they engage in social interactions with non-technical users. This requires robots to be socially interactive and intelligent, including the ability to display appropriate social behaviour. Progress has been made in emotion modelling. However, research into behaviour style is less thorough; no comprehensive, validated model exists of the non-verbal behaviours that express style in human-robot interactions. Based on a literature survey, we created a model of non-verbal behaviour to express high/low warmth and competence, two dimensions that contribute to teaching style. In a perception study, we evaluated this model applied to a NAO robot giving a lecture at primary schools and a diabetes camp in the Netherlands. For this, we developed, based on expert ratings, an instrument measuring perceived warmth, competence, dominance and affiliation. We show that even subtle manipulations of robot behaviour influence children's perceptions of the robot's level of warmth and competence.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130452858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}