Influence of Simulation and Interactivity on Human Perceptions of a Robot During Navigation Tasks
Nathan Tsoi, Rachel Sterneck, Xuan Zhao, Marynel Vázquez. ACM Transactions on Human-Robot Interaction, published 2024-07-16. https://doi.org/10.1145/3675784

Abstract: In Human-Robot Interaction, researchers typically rely on in-person studies to collect subjective perceptions of a robot. Videos of interactions and interactive simulations (where participants control an avatar that interacts with a robot in a virtual world) have also been used to collect human feedback quickly and at scale. How do human perceptions of robots compare across these methodologies? To investigate this question, we conducted a 2x2 between-subjects study (N=160) that evaluated the effect of the interaction environment (Real vs. Simulated) and of participants' interactivity during human-robot encounters (Interactive participation vs. Video observation) on perceptions of a robot (competence, discomfort, social presentation, and social information processing) for the task of navigating in concert with people. We also studied participants' workload across the experimental conditions. Our results revealed a significant difference in perceptions of the robot between the real and simulated environments, as well as differences between watching a video of an encounter and taking part in it. Finally, we found that simulated interactions and videos of simulated encounters resulted in a higher workload than real-world encounters and videos thereof. These results suggest that findings from video and simulation methodologies may not always translate to real-world human-robot interactions. To help practitioners apply these findings and future researchers extend them, we provide guidelines for weighing the tradeoffs between methodologies.
{"title":"Converging Measures and an Emergent Model: A Meta-Analysis of Human-Machine Trust Questionnaires","authors":"Yosef Razin, K. Feigh","doi":"10.1145/3677614","DOIUrl":"https://doi.org/10.1145/3677614","url":null,"abstract":"Trust is crucial for technological acceptance, continued usage, and teamwork. However, human-robot trust, and human-machine trust more generally, suffer from terminological disagreement and construct proliferation. By comparing, mapping, and analyzing well-constructed trust survey instruments, this work uncovers a consensus structure of trust in human-machine interaction. To do so, we identify the most frequently cited and best-validated human-machine and human-robot trust questionnaires as well as the best-established factors that form the dimensions and antecedents of such trust. To reduce both confusion and construct proliferation, we provide a detailed mapping of terminology between questionnaires. Furthermore, we perform a meta-analysis of the regression models which emerged from the experiments that employed multi-factorial survey instruments. Based on this meta-analysis, we provide the most complete, experimentally validated model of human-machine and human-robot trust to date. This convergent model establishes an integrated framework for future research. It determines the current boundaries of trust measurement and where further investigation and validation are necessary. We close by discussing how to choose an appropriate trust survey instrument and how to design for trust. By identifying the internal workings of trust, a more complete basis for measuring trust is developed that is widely applicable.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":4.2,"publicationDate":"2024-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141651115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Generating Pattern-Based Conventions for Predictable Planning in Human-Robot Collaboration
Clare Lohrmann, Maria Stull, A. Roncone, Bradley Hayes. ACM Transactions on Human-Robot Interaction, published 2024-07-01. https://doi.org/10.1145/3659061

Abstract: For humans to work effectively with robots, they must be able to predict the actions and behaviors of their robot teammates rather than merely react to them. While existing techniques enable robots to adapt to human behavior, there is a demonstrated need for methods that explicitly improve humans' ability to understand and predict robot behavior at multi-task timescales. In this work, we propose a method that leverages the innate human propensity for pattern recognition to improve team dynamics in human-robot teams and to make robots more predictable to the humans who work with them. Patterns are a cognitive tool that humans use and rely on constantly, and the human brain is in many ways primed for recognizing and exploiting them. We propose Pattern-Aware Convention-setting for Teaming (PACT), an entropy-based algorithm that identifies and imposes appropriate patterns on a robot's planner or policy over long time horizons. These patterns are autonomously generated and chosen via an algorithmic process that considers human-perceptible features and characteristics derived from the tasks to be completed, and as such it produces behavior that is easier for humans to identify and predict. Our evaluation shows that PACT yields significant improvements in team dynamics and teammate perceptions of the robot compared to robots that use traditionally 'optimal' plans and robots that use unoptimized patterns.

Classification of Co-manipulation Modus with Human-Human Teams for Future Application to Human-Robot Systems
Seth Freeman, Shaden Moss, John L. Salmon, Marc D. Killpack. ACM Transactions on Human-Robot Interaction, published 2024-06-13. https://doi.org/10.1145/3659059

Abstract: Despite the existence of robots that can lift heavy loads, robots that help people move heavy objects are not readily available. This paper makes progress toward effective human-robot co-manipulation by studying 30 human-human dyads that collaboratively manipulated an object weighing 27 kg without being co-located (i.e., participants were at either end of the extended object). Participants maneuvered the object around different obstacles while exhibiting, at any given time, one of four modi: the manner or objective with which a team moves an object together. The primary objective of this work was to classify modus or behavior from force and motion signals. Our results showed that two of the originally proposed modi were so similar that one could be removed while still spanning the space of common behaviors in our co-manipulation tasks. The three modi used in classification were quickly, smoothly, and avoiding obstacles. Using a deep convolutional neural network (CNN), we classified the three modi with up to 89% accuracy on a validation set. The ability to detect or classify modus during co-manipulation has the potential to greatly improve human-robot performance by helping to define appropriate robot behavior or controller parameters depending on the team's objective or modus.

Perceptions of a Robot that Interleaves Tasks for Multiple Users
Elizabeth J. Carter, Peerat Vichivanives, Ruijia Xing, Laura M. Hiatt, Stephanie Rosenthal. ACM Transactions on Human-Robot Interaction, published 2024-05-23. https://doi.org/10.1145/3663486

Abstract: When robots have multiple tasks to perform, they must determine the order in which to complete them. Interleaving tasks is efficient for a robot working through its to-do list, but it may be less satisfying for a human whose request is delayed in favor of schedule efficiency. Following online research that examined delays with various motivations [4, 27], we conducted two in-person studies in which participants' tasks were impacted by the robot's other tasks. In the first, participants either requested a task for the robot to complete on their behalf or watched the robot perform tasks for other people. We measured how their opinions changed depending on whether their task was delayed by another participant's task or they were observing without a task of their own. In the second, participants had a robot walk them to an office and were delayed when the robot detoured to another location. We measured how opinions of the robot changed depending on who requested the detour and how long it took. Overall, participants viewed task interleaving positively as long as the delay and inconvenience imposed by someone else's task were small and the task was well justified. Observers also often had lower opinions of the robot than participants who requested tasks, highlighting a concern for online research.

A Human-Centered View of Continual Learning: Understanding Interactions, Teaching Patterns, and Perceptions of Human Users Towards a Continual Learning Robot in Repeated Interactions
Ali Ayub, Zachary De Francesco, Jainish Mehta, Khaled Yaakoub Agha, Patrick Holthaus, C. Nehaniv, Kerstin Dautenhahn. ACM Transactions on Human-Robot Interaction, published 2024-05-23. https://doi.org/10.1145/3659110

Abstract: Continual learning (CL) has emerged in recent years as an important avenue of research at the intersection of Machine Learning (ML) and Human-Robot Interaction (HRI), allowing robots to learn continually in their environments over long-term interactions with humans. Most continual learning research, however, has been robot-centered, developing algorithms that quickly learn new information from systematically collected static datasets. In this paper, we take a human-centered approach to continual learning, to understand how humans interact with, teach, and perceive continual learning robots over the long term, and whether their teaching styles vary. We developed a socially guided continual learning system that integrates CL models for object recognition with a mobile manipulator robot and allows humans to directly teach and test the robot in real time over multiple sessions. We conducted an in-person, between-participant study with 60 participants, each of whom interacted with the continual learning robot in 5 sessions (300 sessions in total), using three different CL models deployed on the robot. An extensive qualitative and quantitative analysis of the collected data shows significant variation among individual users' teaching styles, indicating the need for personalized adaptation to those styles. Our analysis also shows that the constrained experimental setups widely used to test most CL models are inadequate, as real users interact with and teach continual learning robots in a variety of ways. Finally, although users have concerns about continual learning robots being deployed in daily life, they note that with further improvements such robots could assist older adults and people with disabilities in their homes.
{"title":"Balancing Human Likeness in Social Robots: Impact on Children’s Lexical Alignment and Self-disclosure for Trust Assessment","authors":"Natalia Calvo-Barajas, Anastasia Akkuzu, Ginevra Castellano","doi":"10.1145/3659062","DOIUrl":"https://doi.org/10.1145/3659062","url":null,"abstract":"While there is evidence that human-like characteristics in robots could benefit child-robot interaction in many ways, open questions remain about the appropriate degree of human likeness that should be implemented in robots to avoid adverse effects on acceptance and trust. This study investigates how human likeness, appearance and behavior, influence children’s social and competency trust in a robot. We first designed two versions of the Furhat robot with visual and auditory human-like and machine-like cues validated in two online studies. Secondly, we created verbal behaviors where human likeness was manipulated as responsiveness regarding the robot’s lexical matching. Then, 52 children (7-10 years old) played a storytelling game in a between-subjects experimental design. Results show that the conditions did not affect subjective trust measures. However, objective measures showed that human likeness affects trust differently. While low human-like appearance enhanced social trust, high human-like behavior improved children’s acceptance of the robot’s task-related suggestions. This work provides empirical evidence on manipulating facial features and behavior to control human likeness in a robot with a highly human-like morphology. We discuss the implications and importance of balancing human likeness in robot design and its impacts on task performance, as it directly impacts trust-building with children.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141106639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Children's Acceptance of a Domestic Social Robot: How It Evolves over Time","authors":"Chiara de Jong, J. Peter, R. Kühne, Àlex Barco","doi":"10.1145/3638066","DOIUrl":"https://doi.org/10.1145/3638066","url":null,"abstract":"Little is known about children's long-term acceptance of social robots; whether different types of users exist; and what reasons children have not to use a robot. Moreover, the literature is inconclusive about how the measurement of children's robot acceptance (i.e., self-report or observational) affects the findings. We relied on both self-report and observational data from a six-wave panel study among 321 children aged eight to nine, who were given a Cozmo robot to play with at home over the course of eight weeks. Children's robot acceptance decreased over time, with the strongest drop after two to four weeks. Children rarely rejected the robot (i.e., they did not stop using it already prior to actual adoption). They rather discontinued its use after initial adoption or alternated between using and not using the robot. The competition of other toys and lacking motivation to play with Cozmo emerged as strongest reasons for not using the robot. Self-report measures captured patterns of robot acceptance well but seemed suboptimal for precise assessments of robot use.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140454963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Interaction-Shaping Robotics: Robots that Influence Interactions between Other Agents
Sarah Gillet, Marynel Vázquez, Sean Andrist, Iolanda Leite, Sarah Sebo. ACM Transactions on Human-Robot Interaction, published 2024-02-02. https://doi.org/10.1145/3643803

Abstract: Work in Human-Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human-robot group interactions. Yet the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this paper, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two or more other agents. We highlight key factors of interaction-shaping robots: the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot's influence. We also describe three distinct structures of human-robot groups to highlight the potential of ISR in different group compositions, and we discuss targets for a robot's interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.
{"title":"Perception and Action Augmentation for Teleoperation Assistance in Freeform Tele-manipulation","authors":"Tsung-Chi Lin, Achyuthan Unni Krishnan, Zhi Li","doi":"10.1145/3643804","DOIUrl":"https://doi.org/10.1145/3643804","url":null,"abstract":"Teleoperation enables controlling complex robot systems remotely, providing the ability to impart human expertise from a distance. However, these interfaces can be complicated to use as it is difficult to contextualize information about robot motion in the workspace from the limited camera feedback. Thus, it is required to study the best manner in which assistance can be provided to the operator that reduces interface complexity and effort required for teleoperation. Some techniques that provide assistance to the operator while freeform teleoperating include: 1) perception augmentation, like augmented reality visual cues and additional camera angles, increasing the information available to the operator; 2) action augmentation, like assistive autonomy and control augmentation, optimized to reduce the effort required by the operator while teleoperating. In this paper we investigate: 1) which aspects of dexterous tele-manipulation require assistance; 2) the impact of perception and action augmentation in improving teleoperation performance; 3) what factors impact the usage of assistance and how to tailor these interfaces based on the operators’ needs and characteristics. The findings from this user study and resulting post-study surveys will help identify task based and user preferred perception and augmentation features for teleoperation assistance.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140479040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}