Data-Driven Communicative Behaviour Generation: A Survey
Nurziya Oralbayeva, A. Aly, A. Sandygulova, Tony Belpaeme
ACM Transactions on Human-Robot Interaction, published 2023-08-16. DOI: https://doi.org/10.1145/3609235

Abstract: The development of data-driven behaviour-generating systems has recently become the focus of considerable attention in the fields of human-agent interaction (HAI) and human-robot interaction (HRI). Although rule-based approaches were dominant for years, they proved inflexible and expensive to develop. The difficulty of developing production rules, together with the need for manual configuration to generate artificial behaviours, limits how complex and diverse rule-based behaviours can be. In contrast, actual human-human interaction data collected with tracking and recording devices makes human-like multimodal co-speech behaviour generation possible using machine learning and, in recent years, deep learning in particular. This survey provides an overview of the state of the art in deep learning-based co-speech behaviour generation models and offers an outlook for future research in this area.
{"title":"New Design Potentials of Non-mimetic Sonification in Human-Robot Interaction","authors":"Elias Naphausen, Andreas Muxel, J. Willmann","doi":"10.1145/3611646","DOIUrl":"https://doi.org/10.1145/3611646","url":null,"abstract":"With the increasing use and complexity of robotic devices, the requirements for the design of human-robot interfaces are rapidly changing and call for new means of interaction and information transfer. On that scope, the discussed project – being developed by the Hybrid Things Lab at the University of Applied Sciences Augsburg and the Design Research Lab at Bauhaus-Universität Weimar – takes a first step in characterizing a novel field of research, exploring the design potentials of non-mimetic sonification in the context of human-robot interaction (HRI). Featuring an industrial 7-axis manipulator and collecting multiple information (for instance, the position of the end-effector, joint positions and forces) during manipulation, these data sets are being used for creating a novel augmented audible presence, and thus allowing new forms of interaction. As such, this paper considers (1) research parameters for non-mimetic sonification (such as pitch, volume and timbre);(2) a comprehensive empirical pursuit, including setup, exploration, and validation;(3) the overall implications of integrating these findings into a unifying human-robot interaction process. The relation between machinic and auditory dimensionality is of particular concern.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"40 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76045892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario","authors":"Sooyung Byeon, Joonwon Choi, Yutong Zhang, Inseok Hwang","doi":"10.1145/3603194","DOIUrl":"https://doi.org/10.1145/3603194","url":null,"abstract":"This paper proposes a novel stochastic-skill-level-based shared control framework to assist human novices to emulate human experts in complex dynamic control tasks. The proposed framework aims to infer stochastic-skill-levels (SSLs) of the human novices and provide personalized assistance based on the inferred SSLs. SSL can be assessed as a stochastic variable which denotes the probability that the novice will behave similarly to experts. We propose a data-driven method which can characterize novice demonstrations as a novice model and expert demonstrations as an expert model, respectively. Then, our SSL inference approach utilizes the novice and expert models to assess the SSL of the novices in complex dynamic control tasks. The shared control scheme is designed to dynamically adjust the level of assistance based on the inferred SSL to prevent frustration or tedium during human training due to poorly imposed assistance. The proposed framework is demonstrated by a human subject experiment in a human training scenario for a remotely piloted urban air mobility (UAM) vehicle. The results show that the proposed framework can assess the SSL and tailor the assistance for an individual in real-time. The proposed framework is compared to practice-only training (no assistance) and a baseline shared control approach to test the human learning rates in the designed training scenario with human subjects. A subjective survey is also examined to monitor the user experience of the proposed framework.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"89 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74266351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introduction to the Special Issue on “Designing the Robot Body: Critical Perspectives on Affective Embodied Interaction”","authors":"M. Paterson, G. Hoffman, C. Zheng","doi":"10.1145/3594713","DOIUrl":"https://doi.org/10.1145/3594713","url":null,"abstract":"A","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"42 1","pages":"1 - 9"},"PeriodicalIF":5.1,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75515966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affective Corners as a Problematic for Design Interactions","authors":"Katherine M. Harrison, Ericka Johnson","doi":"10.1145/3596452","DOIUrl":"https://doi.org/10.1145/3596452","url":null,"abstract":"Domestic robots are already commonplace in many homes, while humanoid companion robots like Pepper are increasingly becoming part of different kinds of care work. Drawing on fieldwork at a robotics lab, as well as our personal encounters with domestic robots, we use here the metaphor of “hard-to-reach corners” to explore the socio-technical limitations of companion robots and our differing abilities to respond to these limitations. This paper presents “hard-to-reach-corners” as a problematic for design interaction, offering them as an opportunity for thinking about context and intersectional aspects of adaptation.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"125 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73757550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Sound of Swarm. Auditory Description of Swarm Robotic Movements","authors":"Maria Mannone, V. Seidita, A. Chella","doi":"10.1145/3596203","DOIUrl":"https://doi.org/10.1145/3596203","url":null,"abstract":"Movements of robots in a swarm can be mapped to sounds, highlighting the group behavior through the coordinated and simultaneous variations of musical parameters across time. The vice versa is also possible: sound parameters can be mapped to robotic motion parameters, giving instructions through sound. In this article, we first develop a theoretical framework to relate musical parameters such as pitch, timbre, loudness, and articulation (for each time) with robotic parameters such as position, identity, motor status, and sensor status. We propose a definition of musical spaces as Hilbert spaces, and musical paths between parameters as elements of bigroupoids, generalizing existing conceptions of musical spaces. The use of Hilbert spaces allows us to build up quantum representations of musical states, inheriting quantum computing resources, already used for robotic swarms. We present the theoretical framework and then some case studies as toy examples. In particular, we discuss a 2D video and matrix simulation with two robo-caterpillars; a 2D simulation of 10 robo-ants with Webots; a 3D simulation of three robo-fish in an underwater search&rescue mission.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"33 7 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83951315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"It Takes Two: using Co-creation to Facilitate Child-Robot Co-regulation","authors":"M. Ligthart, Mark Antonius Neerincx, K. Hindriks","doi":"10.1145/3593812","DOIUrl":"https://doi.org/10.1145/3593812","url":null,"abstract":"While interacting with a social robot, children have a need to express themselves and have their expressions acknowledged by the robot. A need that is often unaddressed by the robot, due to its limitations in understanding the expressions of children. To keep the child-robot interaction manageable the robot takes control, undermining children’s ability to co-regulate the interaction. Co-regulation is important for having a fulfilling social interaction. We developed a co-creation activity that aims to facilitate more co-regulation. Children are enabled to create sound effects, gestures, and light animations for the robot to use during their conversation. A crucial additional feature is that children are able to coordinate their involvement of the co-creation process. Results from a user study (N = 59 school children, 7-11 y.o.) showed that the co-creation activity successfully facilitated co-regulation by improving children’s agency. It also positively affected the acceptance of the robot. We furthermore identified five distinct profiles detailing the different needs and motivations children have for the level of involvement they chose during the co-creation process.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"87 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84231214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Computational Model of Coupled Human Trust and Self-confidence Dynamics
Katherine J. Williams, Madeleine S. Yuh, Neera Jain
ACM Transactions on Human-Robot Interaction, pages 1-29, published 2023-04-27. DOI: https://doi.org/10.1145/3594715

Abstract: Autonomous systems that can assist humans with increasingly complex tasks are becoming ubiquitous. Moreover, it has been established that a human's decision to rely on such systems is a function of both their trust in the system and their own self-confidence as it relates to executing the task of interest. Given that both under- and over-reliance on automation can pose significant risks to humans, there is motivation for developing autonomous systems that could appropriately calibrate a human's trust or self-confidence to achieve proper reliance behavior. In this article, a computational model of coupled human trust and self-confidence dynamics is proposed. The dynamics are modeled as a partially observable Markov decision process without a reward function (POMDP/R) that leverages behavioral and self-report data as observations for estimation of these cognitive states. The model is trained and validated using data collected from 340 participants. Analysis of the transition probabilities shows that the proposed model captures the probabilistic relationship between trust, self-confidence, and reliance for all discrete combinations of high and low trust and self-confidence. The use of the proposed model to design an optimal policy to facilitate trust and self-confidence calibration is a goal of future work.
{"title":"“Who said that?” Applying the Situation Awareness Global Assessment Technique to Social Telepresence","authors":"Adam K. Coyne, Keshav Sapkota, C. McGinn","doi":"10.1145/3592801","DOIUrl":"https://doi.org/10.1145/3592801","url":null,"abstract":"As with all remotely-controlled robots, successful teleoperation of social and telepresence robots relies greatly on operator situation awareness, however existing situation awareness measurements, most being originally created for military purposes, are not adapted to the context of social interaction. We propose an objective technique for telepresence evaluation based on the widely-accepted Situation Awareness Global Assessment Technique (SAGAT), adjusted to suit social contexts. This was trialled in a between-subjects participant study (n = 56), comparing the effect of mono and spatial (binaural) audio feedback on operator situation awareness during robot teleoperation in a simulated social telepresence scenario. Subjective data was also recorded, including questions adapted from Witmer and Singer’s Presence Questionnaire, as well as qualitative feedback from participants. No significant differences in situation awareness measurements were detected, however correlations observed between measures call for further research. This study and its findings are a potential starting point for the development of social situation awareness assessment techniques, which can inform future social and telepresence robot design decisions.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"48 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81600542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification","authors":"A. Latupeirissa, C. Panariello, R. Bresin","doi":"10.1145/3585277","DOIUrl":"https://doi.org/10.1145/3585277","url":null,"abstract":"This paper presents three studies where we probe aesthetics strategies of sound produced by movement sonification of a Pepper robot by mapping its movements to sound models. We developed two sets of sound models. The first set was made by two sound models, a sawtooth-based one and another based on feedback chains, for investigating how the perception of synthesized robot sounds would depend on their design complexity. We implemented the second set of sound models for probing the “materiality” of sound made by a robot in motion. This set consisted of a sound synthesis based on an engine highlighting the robot’s internal mechanisms, a metallic sound synthesis highlighting the robot’s typical appearance, and a whoosh sound synthesis highlighting the movement. We conducted three studies. The first study explores how the first set of sound models can influence the perception of expressive gestures of a Pepper robot through an online survey. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) while welcoming patrons into a restaurant and (2) while providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study. Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning the materiality, participants liked better subtle sounds that blend well with the ambient sound (i.e., less distracting) and soundscapes in which sound sources can be identified. Also, sound preferences varied depending on the context in which participants experienced the robot-generated sounds (e.g., as a live museum installation vs. an online display).","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"19 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91362764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}