{"title":"The influence of individual social traits on robot learning in a human-robot interaction","authors":"Hakim Guedjou, S. Boucenna, J. Xavier, D. Cohen, M. Chetouani","doi":"10.1109/ROMAN.2017.8172311","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172311","url":null,"abstract":"Interactive Machine Learning considers that a robot is learning with and/or from a human. In this paper, we investigate the impact of human social traits on robot learning. We explore social traits such as age (children vs. adults) and pathology (typically developing children vs. children with autism spectrum disorder). In particular, we consider learning to recognize both the postures and the identity of a human partner. A human-robot posture imitation learning task, based on a neural network architecture, is used to develop a multi-task learning framework. This architecture exploits three learning levels: 1) visual feature representation, 2) posture classification, and 3) human partner identification. During the experiment, the robot interacts with children with autism spectrum disorder (ASD), typically developing (TD) children, and healthy adults. Previous works assessed the impact of these social traits on learning at the group level. In this paper, we focus on the analysis of individuals separately. The results show that the robot is impacted by the social traits of individuals from these different groups. First, the architecture needs to learn more visual features when interacting with a child with ASD (compared to a TD child) or with a TD child (compared to an adult). However, this surplus in the number of neurons helped the robot to improve posture recognition for TD children but not for children with ASD. Second, preliminary results show that this need for a surplus of neurons while interacting with children with ASD also generalizes to the identity recognition task.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"81 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129910271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
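The three-level multi-task architecture this abstract describes (a shared visual representation feeding a posture head and an identity head) can be sketched as a forward pass with a shared layer and two classification heads. A minimal numpy sketch; all layer sizes, weights, and names here are hypothetical, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes (the paper's actual architecture is not reproduced here).
N_FEATURES, N_HIDDEN, N_POSTURES, N_PARTNERS = 64, 32, 5, 3

# Level 1: shared visual-feature representation.
W_shared = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
# Levels 2 and 3: task-specific heads for posture and partner identity.
W_posture = rng.normal(scale=0.1, size=(N_HIDDEN, N_POSTURES))
W_identity = rng.normal(scale=0.1, size=(N_HIDDEN, N_PARTNERS))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    h = np.tanh(x @ W_shared)                  # shared representation
    return softmax(h @ W_posture), softmax(h @ W_identity)

x = rng.normal(size=N_FEATURES)                # one visual feature vector
p_posture, p_identity = forward(x)
```

Both heads read the same hidden representation, which is what lets a "surplus" of shared feature neurons affect both tasks at once.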
{"title":"Assessing the social criteria for human-robot collaborative navigation: A comparison of human-aware navigation planners","authors":"Harmish Khambhaita, R. Alami","doi":"10.1109/ROMAN.2017.8172447","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172447","url":null,"abstract":"This paper focuses on requirements for effective human-robot collaboration in interactive navigation scenarios. We designed several use-cases in which humans and a robot had to move in the same environment, resembling canonical path-crossing situations. These use-cases include open as well as constrained spaces. Three different state-of-the-art human-aware navigation planners were used for planning the robot paths in all selected use-cases. We compare the results of simulation experiments with these human-aware planners in terms of the quality of the generated trajectories, together with a discussion of the capabilities and limitations of the planners. The results show that the human-robot collaborative planner [1] performs better in everyday path-crossing configurations. This suggests that the criteria used by the human-robot collaborative planner (safety, time-to-collision, directional-costs) are good candidate measures for designing acceptable human-aware navigation planners. Consequently, we analyze the effects of these social criteria and draw perspectives on the future evolution of human-aware navigation planning methods.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134641427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
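One of the criteria named in this abstract, time-to-collision, has a closed form for agents moving on straight-line paths: solve a quadratic in relative position and velocity for the earliest instant the agents come within a combined safety radius. A minimal sketch, not the planner's actual implementation; the 0.6 m combined radius is an assumption:

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2, radius=0.6):
    """Earliest t >= 0 at which two discs of combined radius `radius`
    touch, assuming constant velocities; inf if they never collide."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - radius**2
    if c <= 0:
        return 0.0                     # already overlapping
    if a == 0:
        return float("inf")            # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return float("inf")            # paths never come close enough
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else float("inf")

# Head-on agents 4 m apart, closing at 2 m/s: contact at t = 1.7 s.
t = time_to_collision((0, 0), (1, 0), (4, 0), (-1, 0))
```

A planner can then penalize candidate trajectories whose time-to-collision against any human falls below a comfort threshold.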
{"title":"Strategies and mechanisms to enable dialogue agents to respond appropriately to indirect speech acts","authors":"Gordon Briggs, Matthias Scheutz","doi":"10.1109/ROMAN.2017.8172321","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172321","url":null,"abstract":"Humans often use indirect speech acts (ISAs) when issuing directives. Much of the work in handling ISAs in computational dialogue architectures has focused on correctly identifying and handling the underlying non-literal meaning. There has been less attention devoted to how linguistic responses to ISAs might differ from those given to literal directives and how to enable different response forms in these computational dialogue systems. In this paper, we present ongoing work toward developing dialogue mechanisms within a cognitive, robotic architecture that enables a richer set of response strategies to non-literal directives.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114994599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interoperability in cloud robotics — Developing and matching knowledge information models for heterogenous multi-robot systems","authors":"J. Quintas, P. Menezes, J. Dias","doi":"10.1109/ROMAN.2017.8172471","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172471","url":null,"abstract":"Files, documents, databases, and other digital information now pass through the Cloud. Leveraged by developments in information systems, Cloud Robotics is evolving at a steady pace and has attracted attention in the past five years. This recent field of Robotics allows engineers to envisage new and exciting applications for robots in the near future. This work proposes Cloud Robotics as a means to integrate semantic reasoning in a multi-robot system, using self-created knowledge bases in each robot, in order to coordinate complex task allocation. An auction-based coordination method and a knowledge matching algorithm were implemented to study this subject. The obtained results demonstrate that the coordination of a large multi-robot system and the knowledge matching process can be computationally demanding, making them perfect candidate features to be “cloudyfied”.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115681957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
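The auction-based coordination mentioned in this abstract can be illustrated by a sequential single-item auction: each task is sold to the robot whose bid (its marginal cost of taking the task) is lowest. A toy sketch; the paper's actual bidding rule, cost model, and the robot names below are not taken from the paper:

```python
import math

def auction_allocate(robots, tasks, cost):
    """Sequential single-item auction: each task goes to the lowest
    bidder, bids being the marginal cost of appending the task to
    that robot's current plan."""
    plans = {r: [] for r in robots}
    for task in tasks:
        bids = {r: cost(r, plans[r], task) for r in robots}
        winner = min(bids, key=bids.get)
        plans[winner].append(task)
    return plans

# Illustrative cost model: straight-line travel from the robot's
# last planned location (or its start pose) to the task location.
starts = {"r1": (0, 0), "r2": (10, 0)}

def travel_cost(robot, plan, task):
    last = plan[-1] if plan else starts[robot]
    return math.dist(last, task)

plans = auction_allocate(["r1", "r2"], [(1, 0), (9, 0), (2, 0)], travel_cost)
```

Every bid round requires each robot to evaluate its plan against the task, which is the kind of per-round computation the abstract suggests offloading to the Cloud for large teams.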
{"title":"Impact of embodied training on object recognition","authors":"P. Narayanan, M. Bugajska, W. Lawson, J. Trafton","doi":"10.1109/ROMAN.2017.8172478","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172478","url":null,"abstract":"The ability to perform robust, precise, real-time visual recognition is extremely critical for the use of robotic systems in real-world applications. This paper explores the use of Convolutional Neural Networks (CNNs) and human-assisted training to teach a robot to recognize novel objects. We investigated the impact of providing instructions to a human teacher during a training scenario for novel objects. Participants in the naïve condition were provided verbal instructions by the robot, and participants in the embodied condition were provided embodied demonstrations by the robot. The results showed that a vision system trained by participants with embodied instructions clearly outperformed a system trained by naïve participants. The latest computer vision techniques combined with human-assisted teaching were found to provide excellent results for novel object recognition.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124431752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two deep approaches for ADL recognition: A multi-scale LSTM and a CNN-LSTM with a 3D matrix skeleton representation","authors":"Giovanni Ercolano, D. Riccio, Silvia Rossi","doi":"10.1109/ROMAN.2017.8172406","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172406","url":null,"abstract":"In this work, we propose a deep learning approach for the detection of activities of daily living (ADL) in a home environment, starting from the skeleton data of an RGB-D camera. In this context, the combination of ad hoc feature extraction/selection algorithms with supervised classification approaches has reached excellent classification performance in the literature. Since recurrent neural networks (RNNs) can learn temporal dependencies from instances with a periodic pattern, we propose two deep learning architectures based on Long Short-Term Memory (LSTM) networks. The first (MT-LSTM) combines three LSTMs deployed to learn different time-scale dependencies from pre-processed skeleton data. The second (CNN-LSTM) exploits a Convolutional Neural Network (CNN) to automatically extract features from the correlation of the limbs in a skeleton 3D-grid representation. These models are tested on the CAD-60 dataset. The results show that the CNN-LSTM model outperforms the state of the art with 95.4% precision and 94.4% recall.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116653901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
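The multi-scale idea behind the MT-LSTM (parallel branches seeing the same sequence at different temporal resolutions) can be sketched as stride-based subsampling of the skeleton stream, with one subsampled view feeding each LSTM branch. The strides and joint count below are illustrative, not taken from the paper:

```python
import numpy as np

def multi_scale_views(seq, strides=(1, 2, 4)):
    """Subsample a (T, joints, 3) skeleton sequence at several temporal
    strides, producing one view per LSTM branch (strides illustrative)."""
    seq = np.asarray(seq)
    return [seq[::s] for s in strides]

# 16 frames of a 15-joint skeleton, xyz coordinates per joint.
frames = np.arange(16 * 15 * 3, dtype=float).reshape(16, 15, 3)
views = multi_scale_views(frames)
```

Coarser views let a branch span longer activity patterns with the same number of recurrent steps, while the stride-1 view preserves fine motion detail.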
{"title":"A robot at home — How affect, technology commitment, and personality traits influence user experience in an intelligent robotics apartment","authors":"Jasmin Bernotat, F. Eyssel","doi":"10.1109/ROMAN.2017.8172370","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172370","url":null,"abstract":"Previous research has shown that user features like affect, personality traits, user gender, technology commitment, perceived ease of technology use, and the feeling of being observed impact human-technology interaction (e.g., [1], [2]). To date, most studies have focused on the influence of user characteristics while interacting with single technical devices such as smart phones, audio players (e.g., [3]), or computers (e.g., [1]). To extend this work, we investigated the influence of individual user characteristics, the perceived ease of task completion, and the feeling of being observed on human-technology interaction and human-robot interaction (HRI) in particular. We explored how participants would solve seven tasks within a smart laboratory apartment. To do so, we collected video data and complemented this analysis with survey data to investigate naïve users' attitudes towards the smart home and the robot. User characteristics such as agreeableness, low negative affect, technology acceptance, low perceived competence regarding technology use, and the perceived ease of the task were predictors of positive user experiences within the intelligent robotics apartment. Regression analyses revealed that a positive evaluation of the robot was predicted by positive affect and, to a lesser extent, by technology acceptance. Actual interactions with the robot were predicted by a positive evaluation of the robot and, to a lesser degree, by technology acceptance. Moreover, our findings show that user characteristics and, to a lesser extent, the ease of the task impact HRI within an intelligent apartment. Implications for future research on how to investigate the interplay of user and task characteristics to improve HRI are discussed.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127325519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Playing the mirror game with a humanoid: Probing the social aspects of switching interaction roles","authors":"Shelly Sicat, Shreya Chopra, Nico Li, E. Sharlin","doi":"10.1109/ROMAN.2017.8172437","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172437","url":null,"abstract":"Individuals can easily change interaction roles during everyday tasks, for example by shifting from following someone's lead to leading the task themselves. We are interested in how these existing social experiences scale to human-robot interaction (HRI). How would robots change their interaction roles when working with people? Would changes in interaction roles pose a challenge unique to robots? In this paper, we propose a testbed for changing interaction roles in HRI based on a drama exercise known as the Mirror Game. The Mirror Game enables close collaboration between two individuals, with each closely following the other's movements. Utilizing the Mirror Game with a large humanoid robot allowed us to examine people's reactions to changes in the humanoid's interaction roles. We contribute: 1) the design of a human-robot interaction role-switching testbed based on the Mirror Game, 2) a prototype of our testbed realized with Rethink Robotics' humanoid, Baxter, and 3) the results of a preliminary study examining people's reactions to the robot changing interaction roles to verify the design of the testbed.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127546248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic-based interaction for teaching robot behavior compositions","authors":"Victor Paléologue, Jocelyn Martin, A. Pandey, Alexandre Coninx, M. Chetouani","doi":"10.1109/ROMAN.2017.8172279","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172279","url":null,"abstract":"Allowing humans to teach robot behaviors will facilitate acceptability as well as long-term interactions. Humans would mainly use speech to transfer knowledge or to teach high-level behaviors. In this paper, we propose a proof-of-concept application allowing a Pepper robot to learn behaviors from natural-language descriptions provided by naive human users. In our model, natural language input is provided by grammar-free speech recognition and is then processed to produce semantic knowledge, grounded in language and primitive behaviors. The same semantic knowledge is used to represent any kind of perceived input as well as the actions the robot can perform. The experiment shows that the system can work independently of the domain of application, but also that it has limitations. Progress in semantic extraction, behavior planning, and interaction scenario design could stretch these limits.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125903847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposal of non-rotating joint drive type high output power assist suit for squat lifting","authors":"Shun Mohri, H. Inose, Hirokazu Arakawa, Kazuya Yokoyama, Yasuyuki Yamada, Isao Kikutani, Taro Nakamura","doi":"10.1109/ROMAN.2017.8172460","DOIUrl":"https://doi.org/10.1109/ROMAN.2017.8172460","url":null,"abstract":"Lower back pain is a major health concern worldwide. One cause of lower back pain is the burden on the lumbar region caused by the handling of heavy objects. To reduce this burden, the Ministry of Health, Labour and Welfare in Japan has recommended “squat lifting.” However, this technique, which places a large load on the lower limbs, is not very popular. Therefore, we aimed to develop a power assist suit for squat lifting. In this paper, we propose a gastrocnemius-reinforcing mechanism. Next, we discuss the estimation of joint torque from a motion analysis of squat lifting in order to construct a prototype. Finally, we describe the performance of the prototype mounted on a human body. The %MVC of the gastrocnemius while performing squat lifting was reduced by 40% using the prototype assist suit, compared with the value without the suit.","PeriodicalId":134777,"journal":{"name":"2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"591 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123741008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}