Affinity Diagramming with a Robot
Matthew V. Law, Nnamdi Nwagwu, Amritansh Kwatra, Seo-young Lee, Daniel M. DiAngelis, Naifang Yu, Gonzalo Gonzalez-Pumariega, Amit Rajesh, Guy Hoffman
ACM Transactions on Human-Robot Interaction, 2024-01-31. DOI: https://doi.org/10.1145/3641514
Abstract: We investigate what it might look like for a robot to work with a human on a needfinding design task using an affinity diagram. While some recent projects have examined how human-robot teams might explore solutions to design problems, human-robot collaboration in the sensemaking aspects of the design process has not been studied. Designers use affinity diagrams to make sense of unstructured information by clustering paper notes on a work surface. To explore human-robot collaboration on a sensemaking design activity, we developed HIRO, an autonomous robot that constructs affinity diagrams with humans. In a within-subjects study, 56 participants affinity-diagrammed themes to characterize needs in quotes taken from real-world user data, once alone and once with HIRO. Users spent more time on the task with HIRO than alone, without strong evidence for corresponding effects on cognitive load, and a majority of participants said they preferred to work with HIRO. From post-interaction interviews, we identified eight themes leading to four guidelines for robots that collaborate with humans on sensemaking design tasks: (1) account for the robot's speed; (2) pursue mutual understanding rather than just correctness; (3) identify opportunities for constructive disagreements; (4) use other modes of communication in addition to physical materials.
Scarecrows in Oz: The Use of Large Language Models in HRI
Tom Williams, Cynthia Matuszek, Ross Mead, Nick Depalma
ACM Transactions on Human-Robot Interaction, 2024-01-30. DOI: https://doi.org/10.1145/3606261
Abstract: The proliferation of Large Language Models (LLMs) presents both a critical design challenge and a remarkable opportunity for the field of Human-Robot Interaction (HRI). While the direct deployment of LLMs on interactive robots may be unsuitable for reasons of ethics, safety, and control, LLMs might nevertheless provide a promising baseline technique for many elements of HRI. Specifically, in this article, we argue for the use of LLMs as Scarecrows: "brainless," straw-man black-box modules integrated into robot architectures for the purpose of quickly enabling full-pipeline solutions, much like the use of "Wizard of Oz" (WoZ) and other human-in-the-loop approaches. We explicitly acknowledge that these Scarecrows, rather than providing a satisfying or scientifically complete solution, incorporate a form of the wisdom of the crowd and, in at least some cases, will ultimately need to be replaced or supplemented by a robust and theoretically motivated solution. We provide examples of how Scarecrows could be used in language-capable robot architectures as useful placeholders, and suggest initial reporting guidelines for authors, mirroring existing guidelines for the use and reporting of WoZ techniques.
The Perception of Agency
J. Trafton, J. McCurry, Kevin Zish, Chelsea R. Frazier
ACM Transactions on Human-Robot Interaction, 2024-01-29. DOI: https://doi.org/10.1145/3640011
Abstract: The perception of agency in human-robot interaction has become increasingly important as robots become more capable and more social. There are, however, no accepted or consistent methods of measuring perceived agency; researchers currently use a wide range of techniques and surveys. We provide a definition of perceived agency (PA) and, from that definition, create and psychometrically validate a scale to measure it. We then perform a scale evaluation by comparing the PA scale constructed in Experiment 1 to two other existing scales, finding that our PA and PA-R (Perceived Agency - Rasch) scales provide a better fit to empirical data than existing measures. We also perform scale validation by showing that our scale exhibits the hypothesized relationship between perceived agency and morality.
Learning to Control Complex Robots Using High-Dimensional Body-Machine Interfaces
Jongmin M. Lee, Temesgen Gebrekristos, Dalia De Santis, Mahdieh Nejati-Javaremi, Deepak Gopinath, Biraj Parikh, F. Mussa-Ivaldi, B. Argall
ACM Transactions on Human-Robot Interaction, 2024-01-16. DOI: https://doi.org/10.1145/3630264
Abstract: When individuals are paralyzed from injury or damage to the brain, upper-body movement and function can be compromised. While the use of body motions to interface with machines has been shown to be an effective noninvasive strategy to provide movement assistance and to promote physical rehabilitation, learning to use such interfaces to control complex machines is not well understood. In a five-session study, we demonstrate that a subset of an uninjured population is able to learn and improve their ability to use a high-dimensional Body-Machine Interface (BoMI) to control a robotic arm. We use a sensor net of four inertial measurement units, placed bilaterally on the upper body, and a BoMI with the capacity to directly control a robot in six dimensions. We consider whether the way in which the robot control space is mapped from human inputs has any impact on learning. Our results suggest that the space of robot control does play a role in the evolution of human learning: specifically, although robot control in joint space appears to be more intuitive initially, control in task space is found to have a greater capacity for longer-term improvement and learning. Our results further suggest an inverse relationship between control dimension couplings and task performance.
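To make the "high-dimensional interface" concrete: the abstract describes four bilaterally placed IMUs driving six robot control dimensions. A common BoMI construction (a sketch only; the paper's actual calibrated mapping is not specified in the abstract, and the matrix below is a hypothetical stand-in) projects the body-signal vector to robot commands through a linear map:

```python
import numpy as np

# Hypothetical sketch: four IMUs, each contributing roll/pitch/yaw,
# yield a 12-D body-signal vector. A fixed linear map W projects it
# to a 6-D robot command (e.g., end-effector velocities). In practice
# W would be calibrated per user (e.g., via PCA over body motions).
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 12)) * 0.1  # stand-in for a calibrated map

def body_to_robot(imu_angles: np.ndarray) -> np.ndarray:
    """Map a 12-D vector of IMU orientation angles to a 6-D robot command."""
    assert imu_angles.shape == (12,)
    return W @ imu_angles

cmd = body_to_robot(np.zeros(12))
print(cmd.shape)  # → (6,)
```

Joint-space versus task-space control, the comparison studied in the paper, would correspond to interpreting the 6-D output either as joint velocities or as end-effector velocities.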
{"title":"PRogramAR: Augmented Reality End-User Robot Programming","authors":"Bryce Ikeda, D. Szafir","doi":"10.1145/3640008","DOIUrl":"https://doi.org/10.1145/3640008","url":null,"abstract":"The field of end-user robot programming seeks to develop methods that empower non-expert programmers to task and modify robot operations. In doing so, researchers may enhance robot flexibility and broaden the scope of robot deployments into the real world. We introduce PRogramAR (Programming Robots using Augmented Reality), a novel end-user robot programming system that combines the intuitive visual feedback of augmented reality (AR) with the simplistic and responsive paradigm of trigger-action programming (TAP) to facilitate human-robot collaboration. Through PRogramAR, users are able to rapidly author task rules and desired reactive robot behaviors, while specifying task constraints and observing program feedback contextualized directly in the real world. PRogramAR provides feedback by simulating the robot’s intended behavior and providing instant evaluation of TAP rule executability to help end-users better understand and debug their programs during development. In a system validation, 17 end-users ranging from ages 18 to 83 used PRogramAR to program a robot to assist them in completing three collaborative tasks. 
Our results demonstrate how merging the benefits of AR and TAP using elements from prior robot programming research into a single novel system can successfully enhance the robot programming process for non-expert users.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139531889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
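Trigger-action programming pairs a condition over the observed world with a behavior to run when it holds. As a minimal illustration of the paradigm (the rule representation and names below are hypothetical; the abstract does not describe PRogramAR's internals), a TAP rule set and an executability check might look like:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TAPRule:
    """A hypothetical trigger-action rule: when the trigger predicate
    holds in the current world state, the named robot behavior may run."""
    name: str
    trigger: Callable[[Dict], bool]  # predicate over observed world state
    action: str                      # robot behavior to execute

def executable_rules(rules: List[TAPRule], state: Dict) -> List[TAPRule]:
    """Return the rules whose triggers hold in the current state,
    analogous to checking rule executability before execution."""
    return [r for r in rules if r.trigger(state)]

rules = [
    TAPRule("hand over tool", lambda s: s.get("human_reaches", False), "pick_and_hand_tool"),
    TAPRule("hold part steady", lambda s: s.get("part_on_table", False), "grasp_and_hold"),
]
fired = executable_rules(rules, {"human_reaches": True, "part_on_table": False})
print([r.name for r in fired])  # → ['hand over tool']
```

PRogramAR's contribution is layering AR on top of this kind of check, so the user sees a simulated preview of the triggered behavior in place rather than a textual result.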
{"title":"Towards an Integrative Framework for Robot Personality Research","authors":"Anna Dobrosovestnova, Tim Reinboth, Astrid Weiss","doi":"10.1145/3640010","DOIUrl":"https://doi.org/10.1145/3640010","url":null,"abstract":"Within human-robot interaction (HRI), research on robot personality has largely drawn on trait theories and models, such as the Big Five and OCEAN. We argue that reliance on trait models in HRI has led to a limited understanding of robot personality as a question of stable traits that can be designed into a robot plus how humans with certain traits respond to particular robots. However, trait-based approaches exist alongside other ways of understanding personality including approaches focusing on more dynamic constructs such as adaptations and narratives. We suggest that a deep understanding of robot personality is only possible through a cross-disciplinary effort to integrate these different approaches. We propose an Integrative Framework for Robot Personality Research (IF), wherein robot personality is defined not as a property of the robot, nor of the human perceiving the robot, but as a complex assemblage of components at the intersection of robot design and human factors. 
With the IF, we aim to establish a common theoretical grounding for robot personality research that incorporates personality constructs beyond traits and treats these constructs as complementary and fundamentally interdependent.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139439125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effortless Polite Telepresence using Intention Recognition","authors":"Morteza Daneshmand, Jani Even, Takayuki Kanda","doi":"10.1145/3636433","DOIUrl":"https://doi.org/10.1145/3636433","url":null,"abstract":"Telepresence technology creates the opportunity for people that were traditionally left out of the workforce to work remotely. In the service industry, a pool of novice remote workers could teleoperate robots to perform short work stints to fill in the gaps left by the dwindling workforce. A hurdle is that consistently talking appropriately and politely imposes a severe mental burden on such novice operators and the quality of the service may suffer. In this study, we propose a teleoperation support system that lets novice remote workers talk freely without considering appropriateness and politeness while maintaining the quality of the service. The proposed system exploits intent recognition to transform casual utterances into predefined appropriate and polite utterances. We conducted a within subject user study where 23 participants played the role of novice remote operators controlling a guardsman robot in charge of monitoring customers’ behaviors. We measured the workload with and without using the proposed support system using NASA task load index questionnaires. The workload was significantly lower (p <.001) when using the proposed support system (M = 46.07, SD = 14.36) than when not using it (M = 62.74, SD = 12.70). 
The effect size was large (Cohen’s d = 1.23).","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138976430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
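The reported effect size can be reproduced from the reported means and standard deviations. A quick check using the pooled-SD form of Cohen's d (a sketch; for a within-subjects design the paper may have pooled differently, but this form recovers the reported value):

```python
import math

def cohens_d(m1: float, sd1: float, m2: float, sd2: float) -> float:
    """Cohen's d using the pooled standard deviation of two conditions."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Reported NASA-TLX workload: without support M = 62.74, SD = 12.70;
# with support M = 46.07, SD = 14.36.
d = cohens_d(62.74, 12.70, 46.07, 14.36)
print(round(d, 2))  # → 1.23
```

By Cohen's conventional benchmarks, d ≈ 0.8 or above counts as a large effect, consistent with the paper's characterization.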
Introduction to the Special Issue on Sound in Human-Robot Interaction
F. Robinson, Hannah R. M. Pelikan, Katsumi Watanabe, Luisa Damiano, Oliver Bown, Mari Velonaki
ACM Transactions on Human-Robot Interaction, 2023-12-13. DOI: https://doi.org/10.1145/3632185
Variable Autonomy Through Responsible Robotics: Design Guidelines and Research Agenda
T. Reinmund, P. Salvini, Lars Kunze, Marina Jirotka, A. Winfield
ACM Transactions on Human-Robot Interaction, 2023-12-07. DOI: https://doi.org/10.1145/3636432
Abstract: Physically embodied artificial agents, or robots, are being incorporated into various practical and social contexts, from self-driving cars for personal transportation to assistive robotics in social care. To enable these systems to perform better under changing conditions, designers have proposed endowing robots with varying degrees of autonomous capability and the capacity to move between them, an approach known as variable autonomy. Researchers are beginning to understand how robots with fixed autonomous capabilities influence a person's sense of autonomy, social relations, and, as a result, notions of responsibility; however, these topics remain underexplored in scenarios where robot autonomy changes dynamically. To establish a research agenda for variable autonomy that emphasises the responsible design and use of robotics, we conduct a developmental review. Based on a sample of 42 papers, we provide a synthesised definition of variable autonomy to connect currently disjointed research efforts, detail research approaches in variable autonomy to strengthen the empirical basis for subsequent work, characterise the dimensions of variable autonomy, and present design guidelines for variable autonomy research based on responsible robotics.
The Power of Robot-mediated Play: Forming Friendships and Expressing Identity
Verónica Ahumada-Newhart, Margaret Schneider, Laurel D Riek
ACM Transactions on Human-Robot Interaction, 2023-12-01. DOI: https://doi.org/10.1145/3611656. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10593410/pdf/
Abstract: Tele-operated collaborative robots are used by many children for academic learning. However, as child-directed play is important for social-emotional learning, it is also important to understand how robots can facilitate play. In this article, we present findings from an analysis of a national, multi-year case study in which we explore how 53 children in grades K-12 used robots for self-directed play activities. The contributions of this article are as follows. First, we present empirical data on novel play scenarios that remote children created using their tele-operated robots; these scenarios emerged in five categories of play: physical, verbal, visual, extracurricular, and wished-for play. Second, we identify two unique themes that emerged from the data: robot-mediated play as a foundational support of general friendships, and as a foundational support of self-expression and identity. Third, our work found that robot-mediated play provided benefits similar to in-person play. Findings from our work will inform novel robot and HRI design for tele-operated and social robots that facilitate self-directed play, as well as future interdisciplinary studies on robot-mediated play.