Latest Articles: ACM Transactions on Human-Robot Interaction

Understanding Human Dynamic Sampling Objectives to Enable Robot-assisted Scientific Decision Making
ACM Transactions on Human-Robot Interaction Pub Date : 2023-09-13 DOI: 10.1145/3623383
Shipeng Liu, Cristina G. Wilson, Bhaskar Krishnamachari, Feifei Qian
Abstract: Truly collaborative scientific field data collection between human scientists and autonomous robot systems requires a shared understanding of the search objectives and the tradeoffs faced when making decisions. Critical to developing intelligent robots that aid human experts is therefore an understanding of how scientists make such decisions and how they adapt their data collection strategies when presented with new information in situ. In this study we examined the dynamic data collection decisions of 108 expert geoscience researchers using a simulated field scenario. Human data collection behaviors suggested two distinct objectives: an information-based objective to maximize information coverage, and a discrepancy-based objective to maximize hypothesis verification. We developed a highly simplified quantitative decision model that allows a robot to predict potential human data collection locations based on the two observed objectives. Predictions from this simple model revealed a transition from the information-based to the discrepancy-based objective as the level of information increased. These findings will allow robotic teammates to connect experts' dynamic science objectives with the adaptation of their sampling behaviors and, in the long term, enable the development of more cognitively compatible robotic field assistants.
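The abstract describes a model that shifts from an information-based to a discrepancy-based objective as information accumulates. The paper's actual model is not given here; the following is a minimal illustrative sketch, assuming a simple linear blend of the two objectives weighted by the current information level (all names and values are hypothetical):

```python
import numpy as np

def sample_scores(info_gain, hyp_discrepancy, info_level):
    """Score candidate sampling locations by blending two objectives.

    info_gain: expected information coverage per candidate location.
    hyp_discrepancy: hypothesis-observation mismatch per candidate location.
    info_level: fraction of the site already characterized, in [0, 1].
    As information accumulates, weight shifts from the information-based
    objective toward the discrepancy-based one.
    """
    w = 1.0 - info_level  # weight on information coverage
    return w * np.asarray(info_gain) + (1.0 - w) * np.asarray(hyp_discrepancy)

# Early in a survey, coverage dominates; later, hypothesis checking does.
early = sample_scores([0.9, 0.2], [0.1, 0.8], info_level=0.1)
late = sample_scores([0.9, 0.2], [0.1, 0.8], info_level=0.9)
```

Under this sketch, the preferred location flips from the high-coverage candidate to the high-discrepancy candidate as `info_level` grows, mirroring the transition the study reports.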
Citations: 0
Forging Productive Human-Robot Partnerships Through Task Training
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-08-31 DOI: 10.1145/3611657
Maia Stiber, Yuxiang Gao, R. Taylor, Chien-Ming Huang
Abstract: Productive human-robot partnerships are vital to the successful integration of assistive robots into everyday life. While prior research has explored techniques to facilitate collaboration during human-robot interaction, the work described here aims to forge productive partnerships prior to human-robot interaction, drawing on the way team-building activities help establish effective human teams. Through a 2 (group membership: ingroup and outgroup) × 3 (robot error: main task errors, side task errors, and no errors) online study (N = 62), we demonstrate that (1) a non-social pre-task exercise can help form ingroup relationships; (2) an ingroup robot is perceived as a better, more committed teammate than an outgroup robot (despite the two behaving identically); and (3) participants are more tolerant of negative outcomes when working with an ingroup robot. We discuss how pre-task exercises may serve as an active task-failure mitigation strategy.
Citations: 1
Augmented Reality Visualization of Autonomous Mobile Robot Change Detection in Uninstrumented Environments
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-08-21 DOI: 10.1145/3611654
Christopher M. Reardon, J. Gregory, Kerstin S Haring, Benjamin Dossett, Ori Miller, A. Inyang
Abstract: The creation of information transparency solutions that enable humans to understand robot perception is a challenging requirement for autonomous and artificially intelligent robots to impact a multitude of domains. By taking advantage of comprehensive, high-volume data from robot teammates' advanced perception and reasoning capabilities, humans will be able to make better decisions, with significant impacts ranging from safety to functionality. We present a solution to this challenge by coupling augmented reality (AR) with an intelligent mobile robot that autonomously detects novel changes in an environment. We show that the human teammate can understand and make decisions based on information shared via AR by the robot. Sharing of robot-perceived information is enabled by the robot's online calculation of the human's relative position, making the system robust to environments without external instrumentation such as GPS. Our robotic system performs change detection by comparing current metric sensor readings against a previous reading to identify differences. We experimentally explore the design of change detection visualizations and the aggregation of information, the impact of instruction on communication understanding, the effects of visualization and alignment error, and the relationship between situated 3D visualization in AR and human movement in the operational environment on shared situational awareness in human-robot teams. We demonstrate this novel capability and assess the effectiveness of human-robot teaming in crowdsourced data-driven studies, as well as in an in-person study where participants are equipped with a commercial off-the-shelf AR headset and teamed with a small ground robot that maneuvers through the environment. The mobile robot scans for changes, which are visualized via AR to the participant. The effectiveness of this communication is evaluated through accuracy and subjective assessment metrics to provide insight into interpretation and experience.
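The abstract states that change detection compares current metric sensor readings against a previous reading. As a rough sketch of that idea (the paper's actual representation and threshold are not specified here; the grid, cell size, and threshold below are illustrative assumptions):

```python
import numpy as np

def detect_changes(prev_scan, curr_scan, threshold=0.5):
    """Flag cells whose metric reading differs beyond a threshold.

    prev_scan / curr_scan: 2D arrays of metric sensor readings (e.g., an
    occupancy- or height-grid snapshot); the names and threshold are
    illustrative, not taken from the paper.
    Returns indices of changed cells, which could then be highlighted in AR.
    """
    diff = np.abs(curr_scan - prev_scan)
    return np.argwhere(diff > threshold)

prev = np.zeros((4, 4))
curr = prev.copy()
curr[2, 3] = 1.0  # a new object appears in one cell
changes = detect_changes(prev, curr)
```

Each returned index pair identifies one changed cell, the kind of localized difference the system visualizes to the human teammate.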
Citations: 0
Is Someone There Or Is That The TV? Detecting Social Presence Using Sound
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-08-18 DOI: 10.1145/3611658
Nicholas C Georgiou, Rebecca Ramnauth, Emmanuel Adéníran, Michael Lee, Lila Selin, B. Scassellati
Abstract: Social robots in the home will need to solve audio identification problems to better interact with their users. This paper focuses on the classification between (a) natural conversation that includes at least one co-located user and (b) media playing from electronic sources, such as television shows, that does not require a social response. This classification can help social robots detect a user's social presence using sound. Social robots able to solve this problem can use the information to decide when and how to appropriately engage human users. We compiled a dataset from a variety of acoustic environments containing either natural or media audio, including audio recorded in our own homes. Using this dataset, we performed an experimental evaluation of a range of traditional machine learning classifiers and assessed their ability to generalize to new recordings, acoustic conditions, and environments. We conclude that a C-Support Vector Classification (SVC) algorithm outperformed the other classifiers. Finally, we present a classification pipeline that in-home robots can utilize, and discuss the timing and size of the trained classifiers as well as privacy and ethics considerations.
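The best-performing model named in the abstract, C-Support Vector Classification, is available as scikit-learn's `SVC`. A minimal sketch of that kind of pipeline follows; the features here are random stand-ins for real acoustic features (e.g., MFCCs), and the feature dimensionality and hyperparameters are illustrative assumptions, not the paper's:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-clip acoustic feature vectors.
X_conv = rng.normal(0.0, 1.0, size=(40, 13))   # "natural conversation" class
X_media = rng.normal(2.0, 1.0, size=(40, 13))  # "media playback" class
X = np.vstack([X_conv, X_media])
y = np.array([0] * 40 + [1] * 40)

# Standardize features, then fit a C-Support Vector Classifier.
clf = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
clf.fit(X, y)
acc = clf.score(X, y)
```

In practice the evaluation would use held-out recordings from unseen homes to test the generalization the paper emphasizes, rather than training-set accuracy.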
Citations: 0
Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-Robot Interaction
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-08-17 DOI: 10.1145/3611655
Bastian Orthmann, Iolanda Leite, R. Bresin, Ilaria Torre
Abstract: Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together but rather co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sound could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality) and then map them to a set of audio parameters. The designed sounds were evaluated in five online studies in which people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended, especially when evaluated one feature at a time, and partially when evaluated two features simultaneously. The results of these evaluations suggest that sound can successfully communicate robot states and intended actions implicitly and intuitively.
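The core design move in the abstract is mapping robot features (size, speed, urgency, etc.) onto audio parameters (pitch, volume, timbre). The paper's actual mappings are not reproduced here; this is a hypothetical sketch of what one layer of such a mapping could look like, with made-up ranges and constants:

```python
def sonify(robot):
    """Map robot state to audio parameters (illustrative mapping only).

    robot: dict with 'speed' (m/s, 0-2), 'size' (m, 0-2), 'urgency' (0-1).
    Returns pitch in Hz, volume in [0, 1], and a pulse rate in Hz, so that
    several features are audible simultaneously in one sound.
    """
    pitch = 220.0 + 440.0 * min(robot["speed"] / 2.0, 1.0)  # faster -> higher
    volume = 0.3 + 0.7 * min(robot["size"] / 2.0, 1.0)      # bigger -> louder
    pulse_rate = 1.0 + 7.0 * robot["urgency"]               # urgent -> faster pulses
    return {"pitch_hz": pitch, "volume": volume, "pulse_hz": pulse_rate}

calm = sonify({"speed": 0.2, "size": 0.5, "urgency": 0.0})
urgent = sonify({"speed": 1.8, "size": 0.5, "urgency": 1.0})
```

Because each feature drives an independent audio parameter, a listener can in principle decode them separately, which matches the study's finding that single features were identified more reliably than pairs.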
Citations: 0
Data-Driven Communicative Behaviour Generation: A Survey
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-08-16 DOI: 10.1145/3609235
Nurziya Oralbayeva, A. Aly, A. Sandygulova, Tony Belpaeme
Abstract: The development of data-driven behaviour-generating systems has recently become the focus of considerable attention in the fields of human-agent interaction (HAI) and human-robot interaction (HRI). Although rule-based approaches were dominant for years, they proved inflexible and expensive to develop. The difficulty of developing production rules, as well as the need for manual configuration to generate artificial behaviours, limits how complex and diverse rule-based behaviours can be. In contrast, actual human-human interaction data collected using tracking and recording devices makes human-like multimodal co-speech behaviour generation possible using machine learning and, specifically, in recent years, deep learning. This survey provides an overview of the state of the art in deep-learning-based co-speech behaviour generation models and offers an outlook for future research in this area.
Citations: 0
New Design Potentials of Non-mimetic Sonification in Human-Robot Interaction
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-08-01 DOI: 10.1145/3611646
Elias Naphausen, Andreas Muxel, J. Willmann
Abstract: With the increasing use and complexity of robotic devices, the requirements for the design of human-robot interfaces are rapidly changing and call for new means of interaction and information transfer. In that scope, the discussed project, developed by the Hybrid Things Lab at the University of Applied Sciences Augsburg and the Design Research Lab at Bauhaus-Universität Weimar, takes a first step toward characterizing a novel field of research, exploring the design potentials of non-mimetic sonification in the context of human-robot interaction (HRI). Featuring an industrial 7-axis manipulator and collecting multiple data streams during manipulation (for instance, the position of the end-effector, joint positions, and forces), the project uses these data sets to create a novel augmented audible presence and thus allow new forms of interaction. As such, this paper considers (1) research parameters for non-mimetic sonification (such as pitch, volume, and timbre); (2) a comprehensive empirical pursuit, including setup, exploration, and validation; and (3) the overall implications of integrating these findings into a unifying human-robot interaction process. The relation between machinic and auditory dimensionality is of particular concern.
Citations: 0
Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-06-06 DOI: 10.1145/3603194
Sooyung Byeon, Joonwon Choi, Yutong Zhang, Inseok Hwang
Abstract: This paper proposes a novel stochastic-skill-level-based shared control framework to help human novices emulate human experts in complex dynamic control tasks. The proposed framework aims to infer the stochastic skill levels (SSLs) of human novices and provide personalized assistance based on the inferred SSLs. An SSL can be assessed as a stochastic variable denoting the probability that the novice will behave similarly to experts. We propose a data-driven method that characterizes novice demonstrations as a novice model and expert demonstrations as an expert model, respectively. Our SSL inference approach then uses the novice and expert models to assess a novice's SSL in complex dynamic control tasks. The shared control scheme dynamically adjusts the level of assistance based on the inferred SSL, to prevent the frustration or tedium that poorly imposed assistance can cause during human training. The proposed framework is demonstrated in a human subject experiment on a training scenario for a remotely piloted urban air mobility (UAM) vehicle. The results show that the framework can assess SSLs and tailor assistance for an individual in real time. The framework is compared to practice-only training (no assistance) and a baseline shared control approach to test human learning rates in the designed training scenario. A subjective survey is also examined to monitor the user experience of the proposed framework.
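The abstract describes assistance that scales with an inferred skill level in [0, 1]. The paper's actual control law is not given here; a common shared-control form, used below purely as an illustrative assumption, is a linear blend of the human's command and an assistive command weighted by skill:

```python
def shared_control(u_human, u_expert, skill_level):
    """Blend novice input with an assistive command by inferred skill.

    skill_level: probability in [0, 1] that the novice behaves like an
    expert (the paper's SSL). The linear blend itself is an illustrative
    sketch, not the paper's control law.
    High skill -> mostly the human's own command; low skill -> mostly assistance.
    """
    assert 0.0 <= skill_level <= 1.0
    return skill_level * u_human + (1.0 - skill_level) * u_expert

novice_cmd = shared_control(u_human=0.2, u_expert=1.0, skill_level=0.1)
expert_cmd = shared_control(u_human=0.2, u_expert=1.0, skill_level=0.9)
```

Because the weight tracks the inferred SSL in real time, assistance fades as the trainee improves, which is the mechanism the framework uses to avoid both frustration (too little help) and tedium (too much).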
Citations: 0
Introduction to the Special Issue on “Designing the Robot Body: Critical Perspectives on Affective Embodied Interaction”
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-05-17 DOI: 10.1145/3594713
M. Paterson, G. Hoffman, C. Zheng
Citations: 0
Affective Corners as a Problematic for Design Interactions
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-05-15 DOI: 10.1145/3596452
Katherine M. Harrison, Ericka Johnson
Abstract: Domestic robots are already commonplace in many homes, while humanoid companion robots like Pepper are increasingly becoming part of different kinds of care work. Drawing on fieldwork at a robotics lab, as well as our personal encounters with domestic robots, we use the metaphor of "hard-to-reach corners" to explore the socio-technical limitations of companion robots and our differing abilities to respond to these limitations. This paper presents hard-to-reach corners as a problematic for design interaction, offering them as an opportunity for thinking about context and the intersectional aspects of adaptation.
Citations: 2