{"title":"Evoking an Intentional Stance during Human-Agent Social Interaction: Appearances Can Be Deceiving","authors":"Casey C. Bennett","doi":"10.1109/RO-MAN50785.2021.9515420","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515420","url":null,"abstract":"A critical issue during human-agent and human-robot interaction is eliciting an intentional stance in the human interactor, whereas the human perceives the agent as a fully \"intelligent\" being with full agency towards their own intentions and desires. Eliciting such a stance, however, has proven elusive, despite work in cognitive science, robotics, and human-computer interaction over the past half-century. Here, we argue for a paradigm shift in our approach to this problem, based on a synthesis of recent evidence from social robotics and digital avatars. In short, in order to trigger an intentional stance in humans, perhaps our artificial agents need to adopt one about themselves.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"46 1","pages":"362-368"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89155953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Connecting Humans and Robots Using Physiological Signals – Closing-the-Loop in HRI","authors":"Austin Kothig, J. Muñoz, S. Akgun, A. M. Aroyo, K. Dautenhahn","doi":"10.1109/RO-MAN50785.2021.9515383","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515383","url":null,"abstract":"Technological advancements in creating and commercializing novel unobtrusive and wearable physiological sensors generate new opportunities to develop adaptive human-robot interaction (HRI) scenarios. Detecting complex human states such as engagement and stress when interacting with social agents could bring numerous advantages to create meaningful interactive experiences. Despite being widely used to explain human behaviors in post-interaction analysis with social agents, using bodily signals to create more adaptive and responsive systems remains an open challenge. This paper presents the development of an open-source, integrative, and modular library created to facilitate the design of physiologically adaptive HRI scenarios. The HRI Physio Lib streamlines the acquisition, analysis, and translation of human body signals to additional dimensions of perception in HRI applications using social robots. The software framework has four main components: signal acquisition, processing and analysis, social robot and communication, and scenario and adaptation. Information gathered from the sensors is synchronized and processed to allow designers to create adaptive systems that can respond to detected human states. This paper describes the library and presents a use case that uses a humanoid robot as a cardio-aware exercise coach that uses heartbeats to adapt the exercise intensity to maximize cardiovascular performance. 
The main challenges, lessons learned, scalability of the library, and implications of the physio-adaptive coach are discussed.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"735-742"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80082655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Basic Study for Acceptance of Robots as Meal Partners: Number of Robots During Mealtime, Frequency of Solitary Eating, and Past Experience with Robots","authors":"Ayaka Fujii, K. Okada, M. Inaba","doi":"10.1109/RO-MAN50785.2021.9515451","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515451","url":null,"abstract":"Due to the recent lifestyle changes, instances of people eating alone have been increasing. We think robots can be good meal partners without having to risk disease transmission. Furthermore, people are able to eat with robots without worrying about mealtimes. In this study, we determine who are more likely to accept robots as eating partners and compare eating with a single robot to eating with multiple robots. The results revealed that people who have vast experience in interacting with robots and those who have relatively few opportunities to eat alone felt better about eating with robots, whereas those who have numerous opportunities to eat alone enjoyed eating with multiple robots.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"32 1","pages":"73-80"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81228253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Vignettes: Enabling Large-Scale Interactive HRI Research","authors":"Wen-Ying Lee, Mose Sakashita, E. Ricci, Houston Claure, François Guimbretière, Malte F. Jung","doi":"10.1109/RO-MAN50785.2021.9515376","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515376","url":null,"abstract":"We propose the use of interactive vignettes as an alternative to traditional text- and video-based vignettes for conducting large-scale Human-Robot Interaction (HRI) studies. Interactive vignettes maintain the advantages of traditional vignettes while offering additional affordances for participant interaction and data collection through interactive elements. We discuss the core affordances of interactive vignettes, including explorability, responsiveness, and non-linearity, and look into how these affordances can enable HRI research with more complex scenarios. To demonstrate the strength of the approach, we present a case study of our own research project with N=87 participants and show the data we collect through interactive vignettes. We suggest that the use of interactive vignettes can benefit HRI researchers in learning how participants interact with, respond to, and perceive a robot’s behavior in pre-defined scenarios.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"96 1","pages":"1289-1296"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77191414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling human-like robot personalities as a key to foster socially aware navigation *","authors":"Alessandra Sorrentino, O. Khalid, Luigi Coviello, F. Cavallo, L. Fiorini","doi":"10.1109/RO-MAN50785.2021.9515556","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515556","url":null,"abstract":"This work aims to investigate if a \"robot's personality\" can affect the social perception of the robot in the navigation task. To this end, we implemented a dedicated human-aware navigation system that adapts the configuration of the navigation parameters (i.e. proxemics and velocity) based on two different human-like personalities, extrovert (EXT) and introvert (INT), and we compared them with a no social behavior (NS). We evaluated the system in a dynamic scenario in which each participant needed to pass by a robot moving in the opposite direction, showing a different personality each time. The Eysenck Personality Inventory and a modified version of the Godspeed questionnaire were administered to assess the user’s and the perceived robot’s personalities, respectively. The results show that 19 out of 20 subjects involved in the study perceived a difference among the personalities exhibited by the robot, both in terms of proxemics and velocity. 
Furthermore, the results highlight a general preference of a complementary robot’s personality, helping to suggest some guidelines for future works in the human-aware navigation field.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"132 1","pages":"95-101"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90319221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Out-of-Sight Predictive Tracking for Long-Term Indoor Navigation of Non-Holonomic Person Following Robot*","authors":"A. Ashe","doi":"10.1109/RO-MAN50785.2021.9515348","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515348","url":null,"abstract":"The ability to predict the movements of the target person allows a person following robot (PFR) to coexist with the person while still complying with the social norms. In human-robot collaboration, this is an essential requisite for long-term time-dependent navigation and not losing sight of the person during momentary occlusions that may arise from a crowd due to static or dynamic obstacles, other human beings, or intersections in the local surrounding. The PFR must not only traverse to the previously unknown goal position but also relocate the target person after the miss, and resume following. In this paper, we try to solve this as a coupled motion-planning and control problem by formulating a model predictive control (MPC) controller with non-linear constraints for a wheeled differential-drive robot. And, using a human motion prediction strategy based on the recorded pose and trajectory information of both the moving target person and the PFR, add additional constraints to the same MPC, to recompute the optimal controls to the wheels. We make comparisons with RNNs like LSTM and Early Relocation for learning the best-predicted reference path.MPC is best suited for complex constrained problems because it allows the PFR to periodically update the tracking information, as well as to adapt to the moving person’s stride. We show the results using a simulated indoor environment and lay the foundation for its implementation on a real robot. 
Our proposed method offers a robust person following behaviour without the explicit need for policy learning or offline computation, allowing us to design a generalized framework.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"31 1","pages":"476-481"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90436817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation Differences in Perception of the Elderly Care Robot","authors":"W. Khaksar, Margot M. E. Neggers, E. Barakova, J. Tørresen","doi":"10.1109/RO-MAN50785.2021.9515534","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515534","url":null,"abstract":"Introducing robots in healthcare facilities and homes may reduce the workload of healthcare personnel while providing the users with better and more available services. It may also contribute to interactions that are engaging and safe against transmitting contagious diseases for senior adults. A major challenge in this regard is to design and adapt the robot’s behavior based on the requirements and preferences of the different users. In this paper, we report a conducted use study on how people perceive different kinds of robot encounters. We had two groups of target users: one with senior residents at a care center and another with young students at a university, which would be representative for the visitors and care volunteers in the facility. Several common scenarios have been created to evaluate the perception of the robot’s behavior by the participants. Two sets of questionnaires were used to collect feedback on the behavior and the general perception of the users about the robot´s different styles of behavior. An exploratory analysis of the effect of age shows that the age of the targeted user group should be considered as one of the main criteria when designing the social parameters of a care robot, as seniors preferred slower speed and closer distance to the robot. 
The results can contribute to improving a future robot’s control to better suit users from different generations.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"150 1","pages":"551-558"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86033241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual tactile texture using electrostatic friction display for natural materials: The role of low and high frequency textural stimuli","authors":"Kazuya Otake, S. Okamoto, Yasuhiro Akiyama, Yoji Yamada","doi":"10.1109/RO-MAN50785.2021.9515405","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515405","url":null,"abstract":"As touchscreens have become a standard feature in mobile devices, technologies for presenting tactile texture feedback on the panel have been attracting attention. We tested a new method for presenting natural materials using an electrostatic tactile texture display. In this method, the frictional forces are decomposed into low- and high-frequency components. The low-frequency component was modeled based on Coulomb’s friction law, such that the friction force was reactive to the finger’s normal force. The high-frequency component was modeled using an auto-regressive model to retain its features of frequency spectra. Four natural material types, representing leather, cork, denim, and drawing paper, were presented to six assessors using this method. In a condition where only the low-frequency friction force components were rendered, the materials were correctly recognized at 70%. In contrast, when the high-frequency components were superposed, this rate increased to 80%, although the difference was not statistically significant. 
Our approach to combine a physical friction model and frequency spectrum for low- and high-frequency components, respectively, allows people to recognize virtual natural materials rendered on touch panels.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"5 1","pages":"392-397"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91100643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicted information gain and convolutional neural network for prediction of gait periods using a wearable sensors network","authors":"Uriel Martinez-Hernandez, Adrian Rubio-Solis","doi":"10.1109/RO-MAN50785.2021.9515395","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515395","url":null,"abstract":"This work presents a method for recognition of walking activities and prediction of gait periods using wearable sensors. First, a Convolutional Neural Network (CNN) is used to recognise the walking activity and gait period. Second, the output of the CNN is used by a Predicted Information Gain (PIG) method to predict the next most probable gait period while walking. The output of these two processes are combined to adapt the recognition accuracy of the system. This adaptive combination allows us to achieve an optimal recognition accuracy over time. The validation of this work is performed with an array of wearable sensors for the recognition of level-ground walking, ramp ascent and ramp descent, and prediction of gait periods. The results show that the proposed system can achieve accuracies of 100% and 99.9% for recognition of walking activity and gait period, respectively. These results show the benefit of having a system capable of predicting or anticipating the next information or event over time. 
Overall, this approach offers a method for accurate activity recognition, which is a key process for the development of wearable robots capable of safely assist humans in activities of daily living.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"119 1","pages":"1132-1137"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86123080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Robots: Making Robots More Legible in Multi-Party Interactions","authors":"Miguel Faria, Francisco S. Melo, A. Paiva","doi":"10.1109/RO-MAN50785.2021.9515485","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515485","url":null,"abstract":"In this work we explore implicit communication between humans and robots—through movement—in multi-party (or multi-user) interactions. In particular, we investigate how a robot can move to better convey its intentions using legible movements in multi-party interactions. Current research on the application of legible movements has focused on single-user interactions, causing a vacuum of knowledge regarding the impact of such movements in multi-party interactions. We propose a novel approach that extends the notion of legible motion to multi-party settings, by considering that legibility depends on all human users involved in the interaction, and should take into consideration how each of them perceives the robot’s movements from their respective points-of-view. We show, through simulation and a user study, that our proposed model of multi-user legibility leads to movements that, on average, optimize the legibility of the motion as perceived by the group of users. 
Our model creates movements that allow each human to more quickly and confidently understand what are the robot’s intentions, thus creating safer, clearer and more efficient interactions and collaborations.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"52 1","pages":"1031-1036"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90595610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}