ACM Transactions on Human-Robot Interaction: Latest Publications

Brain-Behavior Relationships of Trust in Shared Space Human-Robot Collaboration
ACM Transactions on Human-Robot Interaction Pub Date : 2023-11-10 DOI: 10.1145/3632149
Sarah K. Hopko, Yinsu Zhang, Aakash Yadav, Prabhakar R. Pagilla, Ranjana K. Mehta
Abstract: Trust in human-robot collaboration is an essential consideration that relates to operator performance, utilization, and experience. While trust’s importance is understood, the state-of-the-art methods to study trust in automation, like surveys, drastically limit the types of insights that can be made. Improvements in measuring techniques can provide a granular understanding of influencers like robot reliability and their subsequent impact on human behavior and experience. This investigation quantifies the brain-behavior relationships associated with trust manipulation in shared space human-robot collaboration (HRC) to advance the scope of metrics to study trust. Thirty-eight participants, balanced by sex, were recruited to perform an assembly task with a collaborative robot under reliable and unreliable robot conditions. Brain imaging, psychological and behavioral eye-tracking, quantitative and qualitative performance, and subjective experiences were monitored. Results from this investigation identify specific information processing and cognitive strategies that result in trust-related behaviors, which were found to be sex-specific. The use of covert measurements of trust can reveal insights that humans cannot consciously report, thus shedding light on processes systematically overlooked by subjective measures. Our findings connect a trust influencer (robot reliability) to upstream cognition and downstream human behavior and are enabled by the utilization of granular metrics.
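As a minimal illustration of what quantifying a brain-behavior relationship can look like in code, the sketch below correlates a neural activation score with a behavioral trust proxy separately for female and male participants. The variable names (prefrontal_activation, gaze_dwell_on_robot) and the simulated data are hypothetical placeholders, not the paper's measures or results.

```python
# Minimal sketch: correlating a neural measure with a behavioral trust proxy.
# All data and variable names are hypothetical placeholders, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 38  # participants, balanced by sex (as in the study design)
sex = np.array(["F", "M"] * (n // 2))
prefrontal_activation = rng.normal(0.0, 1.0, n)        # e.g., an fNIRS-derived score
gaze_dwell_on_robot = 0.5 * prefrontal_activation + rng.normal(0.0, 1.0, n)  # behavioral proxy

for group in ("F", "M"):
    mask = sex == group
    r, p = stats.pearsonr(prefrontal_activation[mask], gaze_dwell_on_robot[mask])
    print(f"sex={group}: r={r:.2f}, p={p:.3f}")
```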
Citations: 0
Which Voice for which Robot? Designing Robot Voices that Indicate Robot Size
ACM Transactions on Human-Robot Interaction Pub Date : 2023-11-08 DOI: 10.1145/3632124
Kerstin Fischer, Oliver Niebuhr
Abstract: Many social robots will have the capacity to interact via speech in the future, and thus they will have to have a voice. However, so far it is unclear how we can create voices that fit their robotic speakers. In this paper, we explore how robot voices can be designed to fit the size of the respective robot. We therefore investigate the acoustic correlates of human voices and body size. In Study I, we analyzed 163 speech samples in connection with their speakers’ body size and body height. Our results show that specific acoustic parameters are significantly associated with body height, and to a lesser degree with body weight, but that different features are relevant for female and male voices. In Study II, we then tested, for female and male voices, to what extent the acoustic features identified can be used to create voices that are reliably associated with the size of robots. The results show that the acoustic features identified provide reliable clues as to whether a large or a small robot is speaking.
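The study relates acoustic parameters to speaker body size. The sketch below shows one way such a relationship can be fitted, with separate regression models for female and male speakers; the feature names (mean f0, formant dispersion) and the simulated data are illustrative assumptions and need not match the acoustic parameters analyzed in the paper.

```python
# Minimal sketch: relating acoustic features of a voice to speaker body height,
# fitted separately for female and male speakers. Feature names and data are
# hypothetical placeholders; the paper's actual acoustic parameters may differ.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

def fit_height_model(n=80):
    mean_f0 = rng.normal(150.0, 30.0, n)                # fundamental frequency (Hz)
    formant_dispersion = rng.normal(1000.0, 100.0, n)   # average formant spacing (Hz)
    height = 175.0 - 0.05 * mean_f0 - 0.01 * formant_dispersion + rng.normal(0, 5, n)
    X = np.column_stack([mean_f0, formant_dispersion])
    model = LinearRegression().fit(X, height)
    return model, model.score(X, height)

for group in ("female", "male"):   # different features may matter per group
    model, r2 = fit_height_model()
    print(group, "coefficients:", model.coef_, "R^2:", round(r2, 2))
```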
Citations: 0
Assistance in Teleoperation of Redundant Robots through Predictive Joint Maneuvering
ACM Transactions on Human-Robot Interaction Pub Date : 2023-11-03 DOI: 10.1145/3630265
Connor Brooks, Wyatt Rees, Daniel Szafir
Abstract: In teleoperation of redundant robotic manipulators, translating an operator’s end effector motion command to joint space can be a tool for maintaining feasible and precise robot motion. Through optimizing redundancy resolution, the control system can ensure the end effector maintains maneuverability by avoiding joint limits and kinematic singularities. In autonomous motion planning, this optimization can be done over an entire trajectory to improve performance over local optimization. However, teleoperation involves a human-in-the-loop who determines the trajectory to be executed through a dynamic sequence of motion commands. We present two systems, PrediKCT and PrediKCS, for utilizing a predictive model of operator commands in order to accomplish this redundancy resolution in a manner that considers future expected motion during teleoperation. Using a probabilistic model of operator commands allows optimization over an expected trajectory of future motion rather than consideration of local motion alone. Evaluation through a user study demonstrates improved control outcomes from this predictive redundancy resolution over minimum joint velocity solutions and inverse kinematics-based motion controllers.
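To make the idea of predictive redundancy resolution concrete, the sketch below uses a planar three-link arm (one redundant degree of freedom): the pseudoinverse tracks the operator's current command, while candidate nullspace motions are scored by how cheaply a set of predicted future commands could be executed from the resulting configuration. The toy arm, sampling scheme, and cost are illustrative stand-ins, not the PrediKCT/PrediKCS formulation.

```python
# Minimal sketch of predictive redundancy resolution on a planar 3-link arm
# (2D task space, 3 joints). The current command is tracked exactly; nullspace
# motion is chosen to keep predicted future commands cheap to execute.
import numpy as np

L = np.array([0.4, 0.3, 0.2])  # link lengths (placeholder arm)

def jacobian(q):
    s = np.array([np.sin(q[0]), np.sin(q[0] + q[1]), np.sin(q.sum())])
    c = np.array([np.cos(q[0]), np.cos(q[0] + q[1]), np.cos(q.sum())])
    J = np.zeros((2, 3))
    J[0] = [-(L[0]*s[0] + L[1]*s[1] + L[2]*s[2]), -(L[1]*s[1] + L[2]*s[2]), -L[2]*s[2]]
    J[1] = [  L[0]*c[0] + L[1]*c[1] + L[2]*c[2],    L[1]*c[1] + L[2]*c[2],   L[2]*c[2]]
    return J

def resolve(q, xdot_cmd, predicted_cmds, dt=0.05, n_candidates=16, seed=0):
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    primary = J_pinv @ xdot_cmd                    # track the current command exactly
    N = np.eye(3) - J_pinv @ J                     # nullspace projector
    rng = np.random.default_rng(seed)
    best_qdot, best_cost = primary, np.inf
    for _ in range(n_candidates):
        qdot = primary + N @ rng.normal(scale=0.5, size=3)   # candidate secondary motion
        q_next = q + dt * qdot
        Jp_next = np.linalg.pinv(jacobian(q_next))
        # Cost: average joint effort needed for the predicted future commands
        # from the resulting configuration (blows up near singularities).
        cost = np.mean([np.linalg.norm(Jp_next @ u) for u in predicted_cmds])
        if cost < best_cost:
            best_qdot, best_cost = qdot, cost
    return best_qdot

q = np.array([0.3, 0.5, 0.4])
xdot_now = np.array([0.05, 0.0])
predicted = [np.array([0.05, 0.02]), np.array([0.04, 0.04])]  # stand-in operator model
print(resolve(q, xdot_now, predicted))
```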
Citations: 0
Robots’ “Woohoo” and “Argh” can Enhance Users’ Emotional and Social Perceptions: An Exploratory Study on Non-Lexical Vocalizations and Non-Linguistic Sounds
ACM Transactions on Human-Robot Interaction Pub Date : 2023-10-17 DOI: 10.1145/3626185
Xiaozhen Liu, Jiayuan Dong, Myounghoon Jeon
Abstract: As robots have become more pervasive in our everyday life, social aspects of robots have attracted researchers’ attention. Because emotions play a crucial role in social interactions, research has been conducted on conveying emotions via speech. Our study sought to investigate the synchronization of multimodal interaction in human-robot interaction (HRI). We conducted a within-subjects exploratory study with 40 participants to investigate the effects of non-speech sounds (natural voice, synthesized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception with emotional body gestures of an anthropomorphic robot (Pepper). While listening to a fairytale with the participant, a humanoid robot responded to the story with recorded emotional non-speech sounds and gestures. Participants showed significantly higher emotion recognition accuracy from the natural voice than from other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which is in line with previous research. The natural voice also induced higher trust, naturalness, and preference compared to other sounds. Interestingly, the musical sound mostly showed lower perception ratings, even compared to no sound. Results are discussed with design guidelines for emotional cues from social robots and future research directions.
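For readers unfamiliar with the analysis, the sketch below shows how per-emotion recognition accuracy and a confusion matrix are computed from forced-choice responses. The response data are fabricated placeholders that only demonstrate the computation, not the study's results.

```python
# Minimal sketch: per-emotion recognition accuracy and a confusion matrix from
# participants' forced-choice responses (fabricated placeholder data).
import numpy as np
from sklearn.metrics import confusion_matrix

emotions = ["anger", "fear", "happiness", "sadness", "surprise"]
rng = np.random.default_rng(2)
intended = rng.choice(emotions, size=200)                   # emotion the robot displayed
perceived = np.where(rng.random(200) < 0.7, intended,       # 70% "recognized" ...
                     rng.choice(emotions, size=200))        # ... otherwise a random guess

cm = confusion_matrix(intended, perceived, labels=emotions)
per_emotion_accuracy = cm.diagonal() / cm.sum(axis=1)
for emo, acc in zip(emotions, per_emotion_accuracy):
    print(f"{emo:>9}: {acc:.2f}")
print(cm)
```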
Citations: 0
Experimental Assessment of Human-Robot Teaming for Multi-Step Remote Manipulation with Expert Operators
ACM Transactions on Human-Robot Interaction Pub Date : 2023-10-17 DOI: 10.1145/3618258
Claudia Pérez-D'Arpino, Rebecca P. Khurshid, Julie A. Shah
Abstract: Remote robot manipulation with human control enables applications where safety and environmental constraints are adverse to humans (e.g., underwater, space robotics, and disaster response) or the complexity of the task demands human-level cognition and dexterity (e.g., robotic surgery and manufacturing). These systems typically use direct teleoperation at the motion level and are usually limited to low-DOF arms and 2D perception. Improving dexterity and situational awareness demands new interaction and planning workflows. We explore the use of human-robot teaming through teleautonomy with assisted planning for remote control of a dual-arm dexterous robot for multi-step manipulation, and conduct a within-subjects experimental assessment (n=12 expert users) to compare it with direct teleoperation using an imitation controller with 2D and 3D perception, as well as with teleoperation through a teleautonomy interface. The proposed assisted planning approach achieves task times comparable with direct teleoperation while improving other objective and subjective metrics, including re-grasps, collisions, and TLX workload. Assisted planning in the teleautonomy interface achieves faster task execution and removes a significant interaction with the operator’s expertise level, resulting in a performance equalizer across users. Our study protocol, metrics, and models for statistical analysis might also serve as a general benchmarking framework in teleoperation domains. Accompanying video and reference R code: https://people.csail.mit.edu/cdarpino/THRIteleop/
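A within-subjects comparison of task times across interface conditions is typically analyzed with a repeated-measures or mixed-effects model. The sketch below shows one such analysis with a random intercept per participant; the column names, condition labels, and data are placeholders, and the authors' own reference analysis code (in R) is linked above.

```python
# Minimal sketch of a within-subjects comparison of task completion time across
# interface conditions using a mixed-effects model with a random intercept per
# participant. Data and column names are placeholders, not the study's results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
conditions = ["teleop_2d", "teleop_3d", "teleautonomy", "assisted_planning"]
rows = []
for subject in range(12):                      # n=12 expert operators, within-subjects
    baseline = rng.normal(300, 30)             # per-subject baseline task time (s)
    for cond in conditions:
        effect = {"teleop_2d": 60, "teleop_3d": 30,
                  "teleautonomy": 10, "assisted_planning": 0}[cond]
        rows.append({"subject": subject, "condition": cond,
                     "task_time": baseline + effect + rng.normal(0, 15)})
df = pd.DataFrame(rows)

model = smf.mixedlm("task_time ~ condition", df, groups=df["subject"]).fit()
print(model.summary())
```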
Citations: 2
IMPRINT: Interactional Dynamics-aware Motion Prediction in Teams using Multimodal Context
ACM Transactions on Human-Robot Interaction Pub Date : 2023-10-16 DOI: 10.1145/3626954
Mohammad Samin Yasar, Md Mofijul Islam, Tariq Iqbal
Abstract: Robots are moving from working in isolation to working with humans as a part of human-robot teams. In such situations, they are expected to work with multiple humans and need to understand and predict the team members’ actions. To address this challenge, in this work, we introduce IMPRINT, a multi-agent motion prediction framework that models the interactional dynamics and incorporates the multimodal context (e.g., data from RGB and depth sensors and skeleton joint positions) to accurately predict the motion of all the agents in a team. In IMPRINT, we propose an Interaction module that can extract the intra-agent and inter-agent dynamics before fusing them to obtain the interactional dynamics. Furthermore, we propose a Multimodal Context module that incorporates multimodal context information to improve multi-agent motion prediction. We evaluated IMPRINT by comparing its performance on human-human and human-robot team scenarios against state-of-the-art methods. The results suggest that IMPRINT outperformed all other methods over all evaluated temporal horizons. Additionally, we provide an interpretation of how IMPRINT incorporates the multimodal context information from all the modalities during multi-agent motion prediction. The superior performance of IMPRINT provides a promising direction to integrate motion prediction with robot perception and enable safe and effective human-robot collaboration.
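The sketch below illustrates, in PyTorch, the general pattern the abstract describes: per-agent recurrent encoding of intra-agent dynamics, attention across agents for inter-agent dynamics, and fusion with a multimodal context embedding before predicting future motion. Layer sizes, module names, and the fusion scheme are guesses for illustration, not the IMPRINT architecture.

```python
# Minimal sketch of interaction-aware, context-conditioned multi-agent motion
# prediction. Dimensions and module choices are illustrative placeholders.
import torch
import torch.nn as nn

class InteractionAwarePredictor(nn.Module):
    def __init__(self, pose_dim=39, ctx_dim=64, hidden=128, horizon=10):
        super().__init__()
        self.intra = nn.GRU(pose_dim, hidden, batch_first=True)        # per-agent dynamics
        self.inter = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.fuse = nn.Linear(hidden + ctx_dim, hidden)                 # add multimodal context
        self.head = nn.Linear(hidden, horizon * pose_dim)
        self.horizon, self.pose_dim = horizon, pose_dim

    def forward(self, poses, context):
        # poses: (batch, agents, time, pose_dim); context: (batch, ctx_dim)
        B, A, T, D = poses.shape
        _, h = self.intra(poses.reshape(B * A, T, D))                   # intra-agent encoding
        h = h[-1].reshape(B, A, -1)
        h_inter, _ = self.inter(h, h, h)                                # inter-agent attention
        ctx = context.unsqueeze(1).expand(-1, A, -1)
        fused = torch.relu(self.fuse(torch.cat([h_inter, ctx], dim=-1)))
        return self.head(fused).reshape(B, A, self.horizon, self.pose_dim)

model = InteractionAwarePredictor()
future = model(torch.randn(2, 3, 25, 39), torch.randn(2, 64))           # 2 teams of 3 agents
print(future.shape)  # torch.Size([2, 3, 10, 39])
```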
Citations: 2
Face2Gesture: Translating Facial Expressions Into Robot Movements Through Shared Latent Space Neural Networks
ACM Transactions on Human-Robot Interaction Pub Date : 2023-10-04 DOI: 10.1145/3623386
Michael Suguitan, Nick DePalma, Guy Hoffman, Jessica Hodgins
Abstract: In this work, we present a method for personalizing human-robot interaction by using emotive facial expressions to generate affective robot movements. Movement is an important medium for robots to communicate affective states, but the expertise and time required to craft new robot movements promotes a reliance on fixed preprogrammed behaviors. Enabling robots to respond to multimodal user input with newly generated movements could stave off staleness of interaction and convey a deeper degree of affective understanding than current retrieval-based methods. We use autoencoder neural networks to compress robot movement data and facial expression images into a shared latent embedding space. Then, we use a reconstruction loss to generate movements from these embeddings and triplet loss to align the embeddings by emotion classes rather than data modality. To subjectively evaluate our method, we conducted a user survey and found that generated happy and sad movements could be matched to their source face images. However, angry movements were most often mismatched to sad images. This multimodal data-driven generative method can expand an interactive agent’s behavior library and could be adopted for other multimodal affective applications.
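The sketch below illustrates the core training signal described in the abstract: a reconstruction loss that decodes movements from face embeddings, plus a triplet loss that groups embeddings by emotion class rather than by modality. The encoder/decoder shapes and the triplet sampling are simplified placeholders, not the paper's networks.

```python
# Minimal sketch: aligning two modality encoders in a shared latent space with a
# reconstruction loss plus a triplet loss over emotion labels (placeholder dims).
import torch
import torch.nn as nn

latent = 32
enc_face = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, latent))
enc_move = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, latent))
dec_move = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, 64))

recon_loss = nn.MSELoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

def loss_step(face_feats, move_feats, face_feats_neg):
    # Anchor: face embedding; positive: movement embedding with the same emotion
    # label; negative: a face embedding from a different emotion class.
    z_face = enc_face(face_feats)
    z_move = enc_move(move_feats)
    z_neg = enc_face(face_feats_neg)
    reconstruction = recon_loss(dec_move(z_face), move_feats)   # face -> movement
    alignment = triplet_loss(z_face, z_move, z_neg)             # group by emotion, not modality
    return reconstruction + alignment

face = torch.randn(8, 256)        # embedded face images (placeholder)
move = torch.randn(8, 64)         # compressed robot movement vectors (placeholder)
face_other = torch.randn(8, 256)  # faces from other emotion classes (placeholder)
print(loss_step(face, move, face_other).item())
```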
Citations: 0
“Do this instead” – Robots that Adequately Respond to Corrected Instructions
ACM Transactions on Human-Robot Interaction Pub Date : 2023-09-22 DOI: 10.1145/3623385
Christopher Thierauf, Ravenna Thielstrom, Bradley Oosterveld, Will Becker, Matthias Scheutz
Abstract: Natural language instructions are effective for tasking autonomous robots and for teaching them new knowledge quickly. Yet, human instructors are not perfect and are likely to make mistakes at times, and will correct themselves when they notice errors in their own instructions. In this paper, we introduce a complete system of robot behaviors for handling such corrections, during both task instruction and action execution. We then demonstrate its operation in an integrated cognitive robotic architecture through spoken language in two tasks: a navigation and retrieval task and a meal assembly task. Verbal corrections occur before, during, and after verbally taught sequences of tasks, demonstrating that the proposed methods enable fast corrections not only of the semantics generated from the instructions, but also of overt robot behavior in a manner shown to be reasonable when compared to human behavior and expectations.
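As a toy illustration of the interaction pattern (not the paper's cognitive architecture), the sketch below shows a task script in which a spoken correction replaces the most recently taught step.

```python
# Minimal sketch: "do this instead" swaps out the most recently taught step in a
# task sequence. This is a toy data structure, not the paper's system.
class TaskScript:
    def __init__(self):
        self.steps = []

    def instruct(self, action):
        self.steps.append(action)

    def correct(self, new_action):
        # "Do this instead": replace the most recent instruction, if any.
        if self.steps:
            self.steps[-1] = new_action
        else:
            self.steps.append(new_action)

script = TaskScript()
script.instruct("go to the kitchen")
script.instruct("pick up the red mug")
script.correct("pick up the blue mug")   # "No, do this instead"
print(script.steps)  # ['go to the kitchen', 'pick up the blue mug']
```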
Citations: 0
Unified Learning from Demonstrations, Corrections, and Preferences during Physical Human-Robot Interaction
ACM Transactions on Human-Robot Interaction Pub Date : 2023-09-22 DOI: 10.1145/3623384
Shaunak A. Mehta, Dylan P. Losey
Abstract: Humans can leverage physical interaction to teach robot arms. This physical interaction takes multiple forms depending on the task, the user, and what the robot has learned so far. State-of-the-art approaches focus on learning from a single modality, or combine some interaction types. Some methods do so by assuming that the robot has prior information about the features of the task and the reward structure. By contrast, in this paper we introduce an algorithmic formalism that unites learning from demonstrations, corrections, and preferences. Our approach makes no assumptions about the tasks the human wants to teach the robot; instead, we learn a reward model from scratch by comparing the human’s input to nearby alternatives, i.e., trajectories close to the human’s feedback. We first derive a loss function that trains an ensemble of reward models to match the human’s demonstrations, corrections, and preferences. The type and order of feedback is up to the human teacher: we enable the robot to collect this feedback passively or actively. We then apply constrained optimization to convert our learned reward into a desired robot trajectory. Through simulations and a user study we demonstrate that our proposed approach more accurately learns manipulation tasks from physical human interaction than existing baselines, particularly when the robot is faced with new or unexpected objectives. Videos of our user study are available at: https://youtu.be/FSUJsTYvEKU
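The sketch below illustrates the unifying idea in the abstract: each piece of human feedback is treated as evidence that the chosen trajectory scores higher than nearby alternatives, and an ensemble of reward networks is trained with a softmax cross-entropy over that comparison set. The trajectory featurization and alternative-sampling scheme are placeholders, not the paper's derivation.

```python
# Minimal sketch: train an ensemble of reward networks so the human's input
# scores highest among itself and nearby alternative trajectories.
import torch
import torch.nn as nn

def make_reward_net(feat_dim=16):
    return nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

ensemble = [make_reward_net() for _ in range(5)]
optims = [torch.optim.Adam(net.parameters(), lr=1e-3) for net in ensemble]

def update(net, opt, human_traj_feats, alternative_feats):
    # human_traj_feats: (feat_dim,); alternative_feats: (n_alt, feat_dim)
    candidates = torch.cat([human_traj_feats.unsqueeze(0), alternative_feats], dim=0)
    rewards = net(candidates).squeeze(-1)            # (1 + n_alt,)
    loss = -torch.log_softmax(rewards, dim=0)[0]     # the human's choice is index 0
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

human = torch.randn(16)                        # featurized human feedback (placeholder)
nearby = human + 0.1 * torch.randn(20, 16)     # perturbed nearby alternatives (placeholder)
for net, opt in zip(ensemble, optims):
    print(update(net, opt, human, nearby))
```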
Citations: 6
UHTP: A User-Aware Hierarchical Task Planning Framework for Communication-Free, Mutually-Adaptive Human-Robot Collaboration
ACM Transactions on Human-Robot Interaction Pub Date : 2023-09-22 DOI: 10.1145/3623387
Kartik Ramachandruni, Cassandra Kent, Sonia Chernova
Abstract: Collaborative human-robot task execution approaches require mutual adaptation, allowing both the human and robot partners to take active roles in action selection and role assignment to achieve a single shared goal. Prior works have utilized a leader-follower paradigm in which either agent must follow the actions specified by the other agent. We introduce the User-aware Hierarchical Task Planning (UHTP) framework, a communication-free human-robot collaborative approach for adaptive execution of multi-step tasks that moves beyond the leader-follower paradigm. Specifically, our approach enables the robot to observe the human, perform actions that support the human’s decisions, and actively select actions that maximize the expected efficiency of the collaborative task. In turn, the human chooses actions based on their observation of the task and the robot, without being dictated by a scheduler or the robot. We evaluate UHTP both in simulation and in a human subjects experiment of a collaborative drill assembly task. Our results show that UHTP achieves more efficient task plans and shorter task completion times than non-adaptive baselines across a wide range of human behaviors, that interacting with a UHTP-controlled robot reduces the human’s cognitive workload, and that humans prefer to work with our adaptive robot over a fixed-policy alternative.
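As a minimal illustration of user-aware action selection, the sketch below has the robot observe which subtask the human started and then pick its own next action to minimize the expected completion time of the shared task, assuming the two work in parallel on disjoint subtasks. Task names, durations, and the flat task list are illustrative; UHTP itself reasons over a hierarchical task representation.

```python
# Minimal sketch: after observing the human's chosen subtask, the robot picks
# the action that minimizes expected remaining completion time (placeholder task).
from itertools import permutations

durations = {            # (human_time, robot_time) in seconds; placeholders
    "fetch_drill_bits": (10, 20),
    "attach_battery":   (15, 12),
    "fasten_screws":    (25, 30),
    "inspect_assembly": (8, None),   # the robot cannot do this one
}

def completion_time(robot_tasks, human_first, remaining):
    human_tasks = [human_first] + [t for t in remaining if t not in robot_tasks]
    human_t = sum(durations[t][0] for t in human_tasks)
    robot_t = sum(durations[t][1] for t in robot_tasks)
    return max(human_t, robot_t)   # agents work in parallel on disjoint subtasks

def choose_robot_action(observed_human_action):
    remaining = [t for t in durations if t != observed_human_action]
    doable = [t for t in remaining if durations[t][1] is not None]
    best, best_time = None, float("inf")
    for k in range(len(doable) + 1):
        for order in permutations(doable, k):
            t = completion_time(list(order), observed_human_action, remaining)
            if t < best_time:
                best, best_time = (order[0] if order else None), t
    return best, best_time

print(choose_robot_action("fetch_drill_bits"))  # e.g., ('fasten_screws', 33)
```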
Citations: 0