Latest Articles in ACM Transactions on Human-Robot Interaction

Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-17 DOI: 10.1145/3585277
A. Latupeirissa, C. Panariello, R. Bresin
This paper presents three studies in which we probe aesthetics strategies for robot sound by sonifying the movements of a Pepper robot, mapping its movements to sound models. We developed two sets of sound models. The first set consisted of two models, one based on sawtooth waves and one based on feedback chains, for investigating how the perception of synthesized robot sounds depends on their design complexity. The second set probed the "materiality" of sound made by a robot in motion and consisted of an engine-like synthesis highlighting the robot's internal mechanisms, a metallic synthesis highlighting the robot's typical appearance, and a whoosh synthesis highlighting the movement itself. The first study explored, through an online survey, how the first set of sound models can influence the perception of expressive gestures of a Pepper robot. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) welcoming patrons into a restaurant and (2) providing information to visitors in a shopping center. In the third study, we conducted an online survey with stimuli similar to those used in the second study. Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning materiality, participants preferred subtle sounds that blend well with the ambient sound (i.e., are less distracting) and soundscapes in which sound sources can be identified. Sound preferences also varied with the context in which participants experienced the robot-generated sounds (e.g., a live museum installation vs. an online display).
Citations: 3
Fielded Human-Robot Interaction for a Heterogeneous Team in the DARPA Subterranean Challenge
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-16 DOI: 10.1145/3588325
Danny G. Riley, E. Frew
Human supervision of multiple fielded robots is a challenging task that requires thoughtful design and implementation of both the underlying infrastructure and the human interface. It also requires a skilled human operator who can manage the workload and judge when to trust the autonomy and when to intervene manually. We present an end-to-end system for human-robot interaction with a heterogeneous team of robots in complex, communication-limited environments. The system includes the communication infrastructure, autonomy interaction, and human interface elements. Results from the DARPA Subterranean Challenge Final Systems Competition are presented as a case study of the design, and the shortcomings of the system are analyzed.
Citations: 0
Affective Robots Need Therapy
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-15 DOI: 10.1145/3543514
Paul Bucci, David Marino, Ivan Beschastnikh
Emotion researchers have begun to converge on the theory that emotions are psychologically and socially constructed. A common assumption in affective robotics is that emotions are categorical brain-body states that can be confidently modeled. But if emotions are constructed, then they are interpretive, ambiguous, and specific to an individual's unique experience. Constructivist views of emotion pose several challenges to affective robotics: first, they call into question the validity of attempting to obtain objective measures of emotion through rating scales or biometrics; second, ambiguous subjective data pose a challenge to computational systems that need structured and definite data to operate. How can a constructivist view of emotion be reconciled with these challenges? In this article, we look to psychotherapy for ontological, epistemic, and methodological guidance. Therapeutic fields (1) already understand emotions to be intrinsically embodied, relative, and metaphorical and (2) have built up substantial knowledge informed by everyday practice. It is our hope that by using interpretive methods inspired by therapeutic approaches, HRI researchers will be able to focus on the practicalities of designing effective embodied emotional interactions.
Citations: 0
Designing a Robot which Touches the User's Head with Intra-Hug Gestures
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580096
Yuya Onishi, H. Sumioka, M. Shiomi
Hugging has many positive benefits, and several studies have explored its application in human-robot interaction. However, due to limitations in robot performance, previous hug robots could only touch the user's back. In this study, we developed a hug robot named "Moffuly-II" that can not only hug with intra-hug gestures but also touch the user's back or head. This paper describes the robot system and users' impressions of hugging the robot.
Citations: 0
Making Music More Inclusive with Hospiano
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580184
Chacharin Lertyosbordin, Nichaput Khurukitwanit, Teeratas Asavareongchai, Sirin Liukasemsarn
Music brings people together; it is a universal language that can help us be more expressive and better understand our feelings and emotions. The "Hospiano" robot is a prototype developed with the goal of making music accessible to all, regardless of physical ability. The robot acts as a pianist and can be placed in hospital lobbies and wards, playing the piano in response to the gestures and facial expressions of patients (i.e., head movement, eye and mouth movement, and proximity). It has three main modes of operation: "Robot Pianist mode," in which it plays pre-existing songs; "Play Along mode," which allows anyone to interact with the music; and "Composer mode," which allows patients to create their own music. The software that controls the prototype's actions runs on the Robot Operating System (ROS). The prototype shows that humans and robots can interact fluently through a robot's vision, which opens up a wide range of possibilities for further interaction between these logical machines and more emotive beings like humans, with the potential to improve the quality of life of the people who use it, increase inclusivity, and leave a better world for future generations.
Citations: 0
People Dynamically Update Trust When Interactively Teaching Robots
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-13 DOI: 10.1145/3568162.3576962
V. B. Chi, B. Malle
Human-robot trust research often measures people's trust in robots in individual scenarios. However, humans may update their trust dynamically as they continuously interact with a robot. In a well-powered study (n = 220), we investigate the trust-updating process across a 15-trial interaction. In a novel paradigm, participants act in the role of teacher to a simulated robot on a smartphone-based platform, and we assess trust at multiple levels (momentary trust feelings, perceptions of trustworthiness, and intended reliance). Results reveal that people are highly sensitive to the robot's learning progress trial by trial: they take into account previous-task performance, current-task difficulty, and cumulative learning across training. More integrative perceptions of robot trustworthiness steadily grow as people gather more evidence from observing robot performance, especially for faster-learning robots. Intended reliance on the robot in novel tasks increased only for faster-learning robots.
Citations: 5
Towards Robot Learning from Spoken Language
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580053
K. Kodur, Manizheh Zand, Maria Kyrarini
The paper proposes a robot learning framework that empowers a robot to automatically generate a sequence of actions from unstructured spoken language. The framework was able to distinguish between instructions and unrelated conversation. Data were collected from 25 participants, who were asked to instruct the robot to perform a collaborative cooking task while being interrupted and distracted. The system identified the sequence of instructed actions for a cooking task with an accuracy of 92.85 ± 3.87%.
Citations: 1
Understanding Differences in Human-Robot Teaming Dynamics between Deaf/Hard of Hearing and Hearing Individuals
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580146
A'di Dust, Carola Gonzalez-Lebron, Shannon Connell, Saurav Singh, Reynold Bailey, Cecilia Ovesdotter Alm, Jamison Heard
With the development of Industry 4.0, more collaborative robots (cobots) are being deployed in manufacturing environments. Hence, research in human-robot interaction (HRI) and human-cobot interaction (HCI) is gaining traction. However, the design of how cobots interact with humans has typically focused on the general able-bodied population, and these interactions are sometimes ineffective for specific groups of users. This study's goal is to identify differences between hearing and deaf/hard-of-hearing individuals when interacting with cobots. Understanding these differences may promote inclusiveness by detecting ineffective interactions, reasoning about why an interaction failed, and adapting the framework's interaction strategy appropriately.
Citations: 0
Who to Teach a Robot to Facilitate Multi-party Social Interactions?
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580056
Jouh Yeong Chew, Keisuke Nakamura
One salient function of social robots is to play the role of facilitator, enhancing the harmony state of multi-party social interactions so that every human participant is encouraged and motivated to engage actively. However, it is challenging to handcraft robot behavior that achieves this objective. One promising approach is for the robot to learn from human teachers. This paper reports the findings of an empirical test to determine the optimal experiment condition for a robot to learn verbal and nonverbal strategies for facilitating a multi-party interaction. A modified L8 Orthogonal Array (OA) is used to design a fractional factorial experiment with factors such as the type of human facilitator, group size, and stimulus type. The response of the OA is the harmony state, explicitly defined in terms of speech turn-taking between speakers and represented using metrics extracted from the first-order Markov transition matrix. Main-effects analyses and ANOVA suggest that the type of human facilitator and the group size are significant factors affecting the harmony state. We therefore propose training a facilitator robot using high school teachers as human teachers and a group size of more than four participants.
Citations: 0
Perception-Intention-Action Cycle as a Human Acceptable Way for Improving Human-Robot Collaborative Tasks
IF 5.1
ACM Transactions on Human-Robot Interaction Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580149
J. E. Domínguez-Vidal, Nicolás Rodríguez, A. Sanfeliu
In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle cannot fully explain the collaborative behaviour of the human-robot pair until it is extended to the Perception-Intention-Action (PIA) cycle, which gives the human's intention a key role at the same level as the robot's perception rather than treating it as a sub-block of perception. Although part of the human's intention can be perceived or inferred by the other agent, such inference is prone to misunderstandings, so in some cases the true intention has to be communicated explicitly to fulfill the task. Here, we explore both types of intention and combine them with the robot's perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object-transportation task, showing that its use can increase trust in the robot.
Citations: 0