{"title":"The Sound of Swarm. Auditory Description of Swarm Robotic Movements","authors":"Maria Mannone, V. Seidita, A. Chella","doi":"10.1145/3596203","DOIUrl":"https://doi.org/10.1145/3596203","url":null,"abstract":"Movements of robots in a swarm can be mapped to sounds, highlighting the group behavior through the coordinated and simultaneous variations of musical parameters across time. The vice versa is also possible: sound parameters can be mapped to robotic motion parameters, giving instructions through sound. In this article, we first develop a theoretical framework to relate musical parameters such as pitch, timbre, loudness, and articulation (for each time) with robotic parameters such as position, identity, motor status, and sensor status. We propose a definition of musical spaces as Hilbert spaces, and musical paths between parameters as elements of bigroupoids, generalizing existing conceptions of musical spaces. The use of Hilbert spaces allows us to build up quantum representations of musical states, inheriting quantum computing resources, already used for robotic swarms. We present the theoretical framework and then some case studies as toy examples. In particular, we discuss a 2D video and matrix simulation with two robo-caterpillars; a 2D simulation of 10 robo-ants with Webots; a 3D simulation of three robo-fish in an underwater search&rescue mission.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83951315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"It Takes Two: using Co-creation to Facilitate Child-Robot Co-regulation","authors":"M. Ligthart, Mark Antonius Neerincx, K. Hindriks","doi":"10.1145/3593812","DOIUrl":"https://doi.org/10.1145/3593812","url":null,"abstract":"While interacting with a social robot, children have a need to express themselves and have their expressions acknowledged by the robot. A need that is often unaddressed by the robot, due to its limitations in understanding the expressions of children. To keep the child-robot interaction manageable the robot takes control, undermining children’s ability to co-regulate the interaction. Co-regulation is important for having a fulfilling social interaction. We developed a co-creation activity that aims to facilitate more co-regulation. Children are enabled to create sound effects, gestures, and light animations for the robot to use during their conversation. A crucial additional feature is that children are able to coordinate their involvement of the co-creation process. Results from a user study (N = 59 school children, 7-11 y.o.) showed that the co-creation activity successfully facilitated co-regulation by improving children’s agency. It also positively affected the acceptance of the robot. We furthermore identified five distinct profiles detailing the different needs and motivations children have for the level of involvement they chose during the co-creation process.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84231214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Computational Model of Coupled Human Trust and Self-confidence Dynamics","authors":"Katherine J. Williams, Madeleine S. Yuh, Neera Jain","doi":"10.1145/3594715","DOIUrl":"https://doi.org/10.1145/3594715","url":null,"abstract":"Autonomous systems that can assist humans with increasingly complex tasks are becoming ubiquitous. Moreover, it has been established that a human’s decision to rely on such systems is a function of both their trust in the system and their own self-confidence as it relates to executing the task of interest. Given that both under- and over-reliance on automation can pose significant risks to humans, there is motivation for developing autonomous systems that could appropriately calibrate a human’s trust or self-confidence to achieve proper reliance behavior. In this article, a computational model of coupled human trust and self-confidence dynamics is proposed. The dynamics are modeled as a partially observable Markov decision process without a reward function (POMDP/R) that leverages behavioral and self-report data as observations for estimation of these cognitive states. The model is trained and validated using data collected from 340 participants. Analysis of the transition probabilities shows that the proposed model captures the probabilistic relationship between trust, self-confidence, and reliance for all discrete combinations of high and low trust and self-confidence. The use of the proposed model to design an optimal policy to facilitate trust and self-confidence calibration is a goal of future work.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76167032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Who said that?” Applying the Situation Awareness Global Assessment Technique to Social Telepresence","authors":"Adam K. Coyne, Keshav Sapkota, C. McGinn","doi":"10.1145/3592801","DOIUrl":"https://doi.org/10.1145/3592801","url":null,"abstract":"As with all remotely-controlled robots, successful teleoperation of social and telepresence robots relies greatly on operator situation awareness, however existing situation awareness measurements, most being originally created for military purposes, are not adapted to the context of social interaction. We propose an objective technique for telepresence evaluation based on the widely-accepted Situation Awareness Global Assessment Technique (SAGAT), adjusted to suit social contexts. This was trialled in a between-subjects participant study (n = 56), comparing the effect of mono and spatial (binaural) audio feedback on operator situation awareness during robot teleoperation in a simulated social telepresence scenario. Subjective data was also recorded, including questions adapted from Witmer and Singer’s Presence Questionnaire, as well as qualitative feedback from participants. No significant differences in situation awareness measurements were detected, however correlations observed between measures call for further research. This study and its findings are a potential starting point for the development of social situation awareness assessment techniques, which can inform future social and telepresence robot design decisions.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81600542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Robotics to Prosthetics: What Design and Engineering Can Do Better Together","authors":"M. Fossati, G. Grioli, M. G. Catalano, A. Bicchi","doi":"10.1145/3588323","DOIUrl":"https://doi.org/10.1145/3588323","url":null,"abstract":"This paper discusses how the disciplines of Design and Engineering are jointly addressing disability and somehow affecting its very interpretation. The discussion focuses on high-tech prostheses, where robotic devices substitute human body parts. The application of robotic technologies to prosthetics has a relatively long history. Nevertheless, only in the last decade have we witnessed applications reach the market and become available for a large base of users who were offered prostheses with superior motor and sensory performance. The process of bringing ever more advanced technologies to fruition by prosthetic users is fully ongoing today, with some promising solutions coming from robotics (such as, e.g. AI techniques or soft robotics materials) to be transferred to human use. In this transfer process, technology alone is insufficient to warrant success, and the need for a close collaboration between the Engineering domain and the Design disciplines is apparent. We address this point with specific reference to a case study, i.e. the transformation of an innovative but by-now established technology in the industrial robotics field (the “Pisa/IIT SoftHand”) into a prosthetic hand (the “SoftHand Pro”). Besides obvious technical considerations about size, connections, control, and so on, which can be addressed with a thorough technical revision of the design, what makes the profound difference between the two devices is that, as a prosthesis, the SoftHand is intended as a human body part, and not as an external tool. To reach its ultimate goals, the hand should become a part of the human user, with his body and mind. The empirical approach and tools of Designers afford the possibility to enrich the re-design process, considering the final user at the centre of the process, in a sort of renewed humanistic approach. The paper reflects this multidisciplinary approach and is structured as follows: the first part describes a cultural framework for the use of high-technology upper limb prostheses. This culture is defined through two significant relations (Users & Society; Users & Device). Inputs come from desk research conducted in different fields, ranging from Social Psychology to Medicine and Rehabilitation area. In this scenario, it is possible to extract design insights applicable to the design brief. The introduction of a robotic prosthetic hand (SoftHand Pro) and a related, single-user case study follow. The aim here is also to illustrate a process where engineering innovations are facilitated by tools from the Design field in the attempt to make the whole process coherently centred on users. Involved are all aspects, from material technology to the covering and finishing of the prosthetic device. 
The resulting, final prototype of the SoftHand Pro is finally presented.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73774654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification","authors":"A. Latupeirissa, C. Panariello, R. Bresin","doi":"10.1145/3585277","DOIUrl":"https://doi.org/10.1145/3585277","url":null,"abstract":"This paper presents three studies where we probe aesthetics strategies of sound produced by movement sonification of a Pepper robot by mapping its movements to sound models. We developed two sets of sound models. The first set was made by two sound models, a sawtooth-based one and another based on feedback chains, for investigating how the perception of synthesized robot sounds would depend on their design complexity. We implemented the second set of sound models for probing the “materiality” of sound made by a robot in motion. This set consisted of a sound synthesis based on an engine highlighting the robot’s internal mechanisms, a metallic sound synthesis highlighting the robot’s typical appearance, and a whoosh sound synthesis highlighting the movement. We conducted three studies. The first study explores how the first set of sound models can influence the perception of expressive gestures of a Pepper robot through an online survey. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) while welcoming patrons into a restaurant and (2) while providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study. Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning the materiality, participants liked better subtle sounds that blend well with the ambient sound (i.e., less distracting) and soundscapes in which sound sources can be identified. Also, sound preferences varied depending on the context in which participants experienced the robot-generated sounds (e.g., as a live museum installation vs. an online display).","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91362764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fielded Human-Robot Interaction for a Heterogeneous Team in the DARPA Subterranean Challenge","authors":"Danny G. Riley, E. Frew","doi":"10.1145/3588325","DOIUrl":"https://doi.org/10.1145/3588325","url":null,"abstract":"Human supervision of multiple fielded robots is a challenging task which requires a thoughtful design and implementation of both the underlying infrastructure and the human interface. It also requires a skilled human able to manage the workload and understand when to trust the autonomy, or manually intervene. We present an end-to-end system for human-robot interaction with a heterogeneous team of robots in complex, communication-limited environments. The system includes the communication infrastructure, autonomy interaction, and human interface elements. Results of the DARPA Subterranean Challenge Final Systems Competition are presented as a case study of the design and analyze the shortcomings of the system.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89329550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affective Robots Need Therapy","authors":"Paul Bucci, David Marino, Ivan Beschastnikh","doi":"10.1145/3543514","DOIUrl":"https://doi.org/10.1145/3543514","url":null,"abstract":"Emotion researchers have begun to converge on the theory that emotions are psychologically and socially constructed. A common assumption in affective robotics is that emotions are categorical brain-body states that can be confidently modeled. But if emotions are constructed, then they are interpretive, ambiguous, and specific to an individual’s unique experience. Constructivist views of emotion pose several challenges to affective robotics: first, it calls into question the validity of attempting to obtain objective measures of emotion through rating scales or biometrics. Second, ambiguous subjective data poses a challenge to computational systems that need structured and definite data to operate. How can a constructivist view of emotion be rectified with these challenges? In this article, we look to psychotherapy for ontological, epistemic, and methodological guidance. These fields (1) already understand emotions to be intrinsically embodied, relative, and metaphorical and (2) have built up substantial knowledge informed by everyday practice. It is our hope that by using interpretive methods inspired by therapeutic approaches, HRI researchers will be able to focus on the practicalities of designing effective embodied emotional interactions.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86531150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cogui","authors":"Adrian Anhuaman, Carlos Granados, William Meza, Roberto Raez","doi":"10.1145/3568294.3580202","DOIUrl":"https://doi.org/10.1145/3568294.3580202","url":null,"abstract":"Autistic kids have difficulties communicating with others and learning new things in an academic environment. Cogui is a robot designed for ASD children. It converses with children in a reciprocal way in order to emphasize with the kid and help them in their learning process while having fun.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74138499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridging the Gap: Using a Game-based Approach to Raise Lay People's Awareness About Care Robots","authors":"Katharina Brunnmayr, A. Weiss","doi":"10.1145/3568294.3580125","DOIUrl":"https://doi.org/10.1145/3568294.3580125","url":null,"abstract":"As people's expectations regarding robots are still mostly shaped by the media and Science Fiction, there exists a gap between imaginaries of robots and the state-of-the-art of robotic technologies. Care robots are one example of existing robots that the general public has little awareness about. In this report, we introduce a card-based game prototype developed with the goal to bridge this gap and explore how people conceive of existing care robots as a part of their daily lives. Based on the trial game runs, we conclude that game-based approach is effective as a device to inform participants in a playful setting about existing care robots and to elicit conversations about the role such robots could play in their lives. In the future, we plan to adapt the prototype and create a design game prototype to develop novel use cases for care robots.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73987901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}