Latest Articles in ACM Transactions on Human-Robot Interaction

Visuo-Textual Explanations of a Robot's Navigational Choices
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3580141
Amar Halilovic, F. Lindner
Abstract: With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots' actions, especially when a robot does something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.
Citations: 1
Variable Autonomy for Human-Robot Teaming (VAT)
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3579957
Manolis Chiou, S. Booth, Bruno Lacerda, Andreas Theodorou, S. Rothfuss
Abstract: As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities are essential. Such capabilities involve teaming with humans in- or out-of-the-loop at different levels of abstraction, leveraging the complementary capabilities of humans and robots. This requires robotic systems that can dynamically vary their level or degree of autonomy to collaborate with the human(s) efficiently and overcome challenging circumstances. Variable Autonomy (VA) is an umbrella term encompassing such research, including but not limited to shared control and shared autonomy, mixed-initiative, adjustable autonomy, and sliding autonomy. This workshop is driven by the timely need to bring together VA-related research and practices that are often disconnected across different communities, as the field is relatively young. The workshop's goal is to consolidate research in VA. To this end, and given the complexity and span of human-robot systems, this workshop will adopt a holistic trans-disciplinary approach aiming to: a) identify and classify related common challenges and opportunities; b) identify the disciplines that need to come together to tackle these challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; and d) define short- and long-term research goals for the community. To achieve these objectives, the workshop aims to bring together industry stakeholders, researchers from fields under the banner of VA, and specialists from highly related fields such as human factors and psychology. The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, working toward a shared vision for VA.
Citations: 0
Human-Drone Interaction: Interacting with People Smoking in Prohibited Areas
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3580173
Yermakhan Kassym, Saparkhan Kassymbekov, Kamila Zhumakhanova, A. Sandygulova
Abstract: Drones are continually entering our daily lives through a number of different applications. This creates a natural demand for better ways for humans and drones to interact. One application that would benefit from improved interaction is the inspection of smoking in prohibited areas. We propose our own drone flight gesture that we believe delivers the message "do not smoke" better than the ready-made built-in gesture. To this end, we conducted a within-subject experiment involving 19 participants, in which we evaluated the gestures on a drone operated through a Wizard-of-Oz interaction design. The results demonstrate that the proposed gesture was better at conveying the message than the built-in gesture.
Citations: 0
HighLight
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.5040/9781350088733.0124
Alessandro Cabrio, Negin Hashmati, Philip Rabia, Liina Tumma, Hugo Wärnberg, Sjoerd Hendriks, Mohammad Obaid
Citations: 0
Stretch to the Client; Re-imagining Interfaces
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3580212
Kay N. Wojtowicz, M. E. Cabrera
Abstract: This paper presents efforts toward the creation of a client interface for use with Hello-Robot Stretch. The goal is to create an accessible interface that allows for the best possible user experience. The interface enables users to control Stretch with basic commands through several modalities. To make it accessible, a simple and clear web interface was crafted so that users of differing abilities can successfully interact with Stretch. A voice-activated option was also added to further increase the range of possible interactions.
Citations: 0
Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3580150
Takuto Akiyoshi, H. Sumioka, Hirokazu Kumazaki, Junya Nakanishi, Hirokazu Kato, M. Shiomi
Abstract: One of the important roles of social robots is to support mental health through conversations with people. In this study, we focused on the column method, which supports cognitive restructuring, is used as one of the programs in psychiatric day care, and helps patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot's conversation content based on the column method and implemented its autonomous conversation function. This paper reports on preliminary experiments conducted to evaluate and improve the effectiveness of this prototype system in an actual psychiatric day care setting, along with comments from the experiment participants and day care staff.
Citations: 0
Crowdsourcing Task Traces for Service Robotics
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3580112
David J. Porfirio, Allison Sauppé, M. Cakmak, Aws Albarghouthi, Bilge Mutlu
Abstract: Demonstration is an effective end-user development paradigm for teaching robots how to perform new tasks. In this paper, we posit that demonstration is useful not only as a teaching tool, but also as a way to understand and assist end-user developers in thinking about a task at hand. As a first step toward gaining this understanding, we constructed a lightweight web interface to crowdsource step-by-step instructions for common household tasks, leveraging the imaginations and past experiences of potential end-user developers. As evidence of the utility of our interface, we deployed it on Amazon Mechanical Turk and collected 207 task traces spanning 18 different task categories. We describe our vision for how these task traces can be operationalized as task models within end-user development tools and provide a roadmap for future work.
Citations: 0
Mixed Reality-based Exergames for Upper Limb Robotic Rehabilitation
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3580124
Nadia Vanessa Garcia Hernandez, S. Buccelli, M. Laffranchi, L. D. De Michieli
Abstract: Robotic rehabilitation devices show strong potential for intensive, task-oriented, and personalized motor training. Integrating Mixed Reality (MR) technology and tangible objects into these systems allows the creation of attractive, stimulating, and personalized hybrid environments. Using a gamification approach, MR-based robotic training can increase patients' motivation, engagement, and experience. This paper presents the development of two Mixed Reality-based exergames for performing bimanual exercises assisted by a shoulder rehabilitation exoskeleton and using tangible objects. The system design was completed through a user-centered iterative process. The system evaluates task performance and cost-function metrics from kinematic analysis of the hands' movement. A preliminary evaluation of the system is presented, which confirms its correct operation and shows that it stimulates the desired upper limb movements.
Citations: 0
Transfer Learning of Human Preferences for Proactive Robot Assistance in Assembly Tasks
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568162.3576965
Heramb Nemlekar, N. Dhanaraj, Angelos Guan, S. Gupta, S. Nikolaidis
Abstract: We focus on enabling robots to proactively assist humans in assembly tasks by adapting to their preferred sequence of actions. Much work on robot adaptation requires human demonstrations of the task. However, human demonstrations of real-world assemblies can be tedious and time-consuming. Thus, we propose learning human preferences from demonstrations in a shorter, canonical task to predict user actions in the actual assembly task. The proposed system uses the preference model learned from the canonical task as a prior and updates the model through interaction when predictions are inaccurate. We evaluate the proposed system in simulated assembly tasks and in a real-world human-robot assembly study, and we show that both transferring the preference model from the canonical task and updating the model online contribute to improved accuracy in human action prediction. This enables the robot to proactively assist users, significantly reduce their idle time, and improve their experience of working with the robot, compared to a reactive robot.
Citations: 0
A Multimodal Dataset for Robot Learning to Imitate Social Human-Human Interaction
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date: 2023-03-13, DOI: 10.1145/3568294.3580080
Nguyen Tan Viet Tuyen, A. Georgescu, Irene Di Giulio, O. Çeliktutan
Abstract: Humans tend to use various nonverbal signals to communicate their messages to their interaction partners. Previous studies utilized this channel as an essential clue in developing automatic approaches for understanding, modelling, and synthesizing individual behaviours in human-human interaction and human-robot interaction settings. In small-group interactions, on the other hand, an essential aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI (Learning to Imitate Social Human-Human Interaction), a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities simultaneously captured by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed to be a benchmark for HRI and multimodal learning research, for modelling intra- and interpersonal nonverbal signals in social interaction contexts and investigating how to transfer such models to social robots.
Citations: 2