{"title":"Open-source Natural Language Processing on the PAL Robotics ARI Social Robot","authors":"S. Lemaignan, S. Cooper, Raquel Ros, L. Ferrini, Antonio Andriella, Aina Irisarri","doi":"10.1145/3568294.3580041","DOIUrl":"https://doi.org/10.1145/3568294.3580041","url":null,"abstract":"We demonstrate how state-of-the-art open-source tools for automatic speech recognition (vosk) and dialogue management (rasa) can be integrated on a social robotic platform (PAL Robotics' ARI robot) to provide rich verbal interactions. Our open-source, ROS-based pipeline implements the ROS4HRI standard, and the demonstration specifically presents the details of the integration, in a way that will enable attendees to replicate it on their robots. The demonstration takes place in the context of assistive robotics and robots for elderly care, two application domains with unique interaction challenges, for which the ARI robot has been designed and extensively tested in real-world settings.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"89 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82352478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Utilizing Prior Knowledge to Improve Automatic Speech Recognition in Human-Robot Interactive Scenarios","authors":"Pradip Pramanick, Chayan Sarkar","doi":"10.1145/3568294.3580129","DOIUrl":"https://doi.org/10.1145/3568294.3580129","url":null,"abstract":"The success of human-robot interaction depends not only on a robot's ability to understand the intent and content of the human utterance but also on the accuracy of the automatic speech recognition (ASR) system. Modern ASR can provide highly accurate (grammatically and syntactically) translation. Yet, general-purpose ASR often misses the semantics of the translation through incorrect word prediction due to open-vocabulary modeling. ASR inaccuracy can have significant repercussions, as it can lead to a completely different action by the robot in the real world. Can any prior knowledge be helpful in such a scenario? In this work, we explore how prior knowledge can be utilized in ASR decoding. Through our experiments, we demonstrate how our system can significantly improve ASR translation for robotic task instruction.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"58 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84891050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaborative Planning and Negotiation in Human-Robot Teams","authors":"Christine T. Chang, Mitchell Hebert, Bradley Hayes","doi":"10.1145/3568294.3579978","DOIUrl":"https://doi.org/10.1145/3568294.3579978","url":null,"abstract":"Our work aims to apply iterative communication techniques to improve the functionality of human-robot teams working in space and other high-risk environments. Forms of iterative communication include progressive incorporation of human preference and otherwise latent task specifications. Our prior work found that humans would choose not to comply with robot-provided instructions and then proceed to self-justify their choices, despite the risks of physical harm and in blatant disregard of the rules. Results clearly showed that humans working near robots are willing to sacrifice safety for efficiency. Current work aims to improve communication by iteratively incorporating human preference into optimized path planning for human-robot teams operating over large areas. Future work will explore the extent to which negotiation can be used as a mechanism for improving task planning and joint task execution for humans and robots.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"49 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84917363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Policy Shaping for Human-Robot Collaboration with Transparent Matrix Overlays","authors":"Jake Brawer, Debasmita Ghose, Kate Candon, Meiying Qin, A. Roncone, Marynel Vázquez, B. Scassellati","doi":"10.1145/3568162.3576983","DOIUrl":"https://doi.org/10.1145/3568162.3576983","url":null,"abstract":"One important aspect of effective human-robot collaborations is the ability for robots to adapt quickly to the needs of humans. While techniques like deep reinforcement learning have demonstrated success as sophisticated tools for learning robot policies, the fluency of human-robot collaborations is often limited by these policies' inability to integrate changes to a user's preferences for the task. To address these shortcomings, we propose a novel approach that can modify learned policies at execution time via symbolic if-this-then-that rules corresponding to a modular and superimposable set of low-level constraints on the robot's policy. These rules, which we call Transparent Matrix Overlays, function not only as succinct and explainable descriptions of the robot's current strategy but also as an interface by which a human collaborator can easily alter a robot's policy via verbal commands. We demonstrate the efficacy of this approach on a series of proof-of-concept cooking tasks performed in simulation and on a physical robot.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"30 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78928547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robots for Learning 7 (R4L): A Look from Stakeholders' Perspective","authors":"D. Tozadore, Jauwairia Nasir, Sarah Gillet, Rianne van den Berghe, Arzu Guneysu, W. Johal","doi":"10.1145/3568294.3579958","DOIUrl":"https://doi.org/10.1145/3568294.3579958","url":null,"abstract":"This year's conference theme \"HRI for all\" not only raises the importance of reflecting on how to promote inclusion for every type of user but also calls for careful consideration of the different layers of people potentially impacted by such systems. In educational setups, for instance, the users to be considered first and foremost are the learners. However, teachers, school directors, therapists and parents also form a more secondary layer of users in this ecosystem. The 7th edition of R4L focuses on the issues that HRI experiments in educational environments may cause to stakeholders and how we could improve on bringing the stakeholders' point of view into the loop. This goal is expected to be achieved in a very practical and dynamic way by means of: (i) lightning talks from the participants; (ii) two discussion panels with special guests: one with active researchers from academia and industry about their experience and point of view regarding the inclusion of stakeholders; another with teachers, school directors, and parents who are or were involved in HRI experiments and will share their viewpoint; (iii) semi-structured group discussions and hands-on activities with participants and panellists to evaluate and propose guidelines for good practices regarding how to promote the inclusion of stakeholders, especially teachers, in educational HRI activities. By acquiring the viewpoints of the experimenters and stakeholders and analysing them in the same workshop, we expect to identify current gaps, propose practical solutions to bridge these gaps, and capitalise on existing synergies with the collective intelligence of the two communities.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"43 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79979988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Language Models for Human-Robot Interaction","authors":"E. Billing, Julia Rosén, M. Lamb","doi":"10.1145/3568294.3580040","DOIUrl":"https://doi.org/10.1145/3568294.3580040","url":null,"abstract":"Recent advances in large-scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference, and the source code of this integration is shared with the hope that it will serve the community in designing and evaluating new dialogue systems for robots.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"25 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78011440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot Theory of Mind with Reverse Psychology","authors":"Chuang Yu, Baris Serhan, M. Romeo, A. Cangelosi","doi":"10.1145/3568294.3580144","DOIUrl":"https://doi.org/10.1145/3568294.3580144","url":null,"abstract":"Theory of mind (ToM) corresponds to the human ability to infer other people's desires, beliefs, and intentions. Acquisition of ToM skills is crucial to obtain a natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who plays a trust-based card game against another human. The robot infers its partner's trust in the robot's decision system via reinforcement learning. Robot ToM refers to the ability to implicitly anticipate the human collaborator's strategy and inject the prediction into its optimal decision model for better team performance. In our experiments, the robot learns when its human partner does not trust the robot and consequently gives recommendations in its optimal policy to ensure the effectiveness of team performance. The interesting finding is that the optimal robotic policy attempts to use reverse psychology on its human collaborator when trust is low. This finding will provide guidance for the study of a trustworthy robot decision model with a human partner in the loop.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"32 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85125818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Gesture Recognition with a Flow-based Model for Human Robot Interaction","authors":"Lanmiao Liu, Chuang Yu, Siyang Song, Zhidong Su, A. Tapus","doi":"10.1145/3568294.3580145","DOIUrl":"https://doi.org/10.1145/3568294.3580145","url":null,"abstract":"Human skeleton-based gesture classification plays a dominant role in social robotics. Learning the variety of human skeleton-based gestures can help the robot to continuously interact in an appropriate manner in natural human-robot interaction (HRI). In this paper, we propose a Flow-based model to classify human gesture actions from skeletal data. Instead of inferring new human skeleton actions from noisy data using a retrained model, our end-to-end model can expand the diversity of labels for gesture recognition from noisy data without retraining. Our model first focuses on detecting five human gesture actions (i.e., come on, right up, left up, hug, and noise-random action). The accuracy of our online human gesture recognition system matches that of the offline one, and both attain 100% accuracy on the first four actions. Our method also infers new human gesture actions more efficiently without retraining, achieving about 90% accuracy on the noise-random action. The gesture recognition system has been applied to the robot's reaction toward the human gesture, which is promising for facilitating natural human-robot interaction.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"28 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87611736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sawarimōto","authors":"Aidan Edward Fox-Tierney, Kurima Sakai, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro","doi":"10.1145/3568294.3580131","DOIUrl":"https://doi.org/10.1145/3568294.3580131","url":null,"abstract":"Although robot-to-human touch experiments have been performed, they have all used direct tele-operation with a remote controller, pre-programmed hand motions, or wearable trackers to track the human. This report introduces a project that aims to visually track and touch a person's face with a humanoid android, using a single RGB-D camera for 3D pose estimation. There are three major components: 3D pose estimation, a touch sensor for the android's hand, and a controller that combines the pose and sensor information to direct the android's actions. The pose estimation is working and has been released as open source. A touch sensor glove has been built, and we have begun work on creating an under-skin version. Finally, we have tested android face-touch control. These tests revealed many hurdles that will need to be overcome, but also how convincing the experience already is, showing the potential of this technology to elicit strong emotional responses.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"102 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74262536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can a Robot's Hand Bias Human Attention?","authors":"Giulia Scorza Azzarà, Joshua Zonca, F. Rea, Joo-Hyun Song, A. Sciutti","doi":"10.1145/3568294.3580074","DOIUrl":"https://doi.org/10.1145/3568294.3580074","url":null,"abstract":"Previous studies have revealed that humans prioritize attention to the space near their hands (the so-called near-hand effect). This effect may also occur towards a human partner's hand, but only after sharing a physical joint action. Hence, in human dyads, interaction leads to a shared body representation that may influence basic attentional mechanisms. Our project investigates whether a collaborative interaction with a robot might similarly influence attention. To this aim, we designed an experiment to assess whether the mere presence of a robot with an anthropomorphic hand could bias the human partner's attention. We replicated a classical psychological paradigm to measure this attentional bias (i.e., the near-hand effect) by adding a robotic condition. Preliminary results show the near-hand effect when participants performed the task with their own hand near the screen, leading to shorter reaction times on the same side as the hand. In contrast, we found no effect for the robot's hand in the absence of prior collaborative interaction with the robot, in line with studies involving human partners.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"12 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74615342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}