2011 RO-MAN Latest Publications

A considerate care robot able to serve in multi-party settings
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005286
Yoshinori Kobayashi, Masahiko Gyoda, T. Tabata, Y. Kuno, K. Yamazaki, Momoyo Shibuya, Yukiko Seki, Akiko Yamazaki
{"title":"A considerate care robot able to serve in multi-party settings","authors":"Yoshinori Kobayashi, Masahiko Gyoda, T. Tabata, Y. Kuno, K. Yamazaki, Momoyo Shibuya, Yukiko Seki, Akiko Yamazaki","doi":"10.1109/ROMAN.2011.6005286","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005286","url":null,"abstract":"This paper introduces a service robot that provides assisted-care, such as serving tea to the elderly in care facilities. In multi-party settings, a robot is required to be able to deal with requests from multiple individuals simultaneously. In particular, when the service robot is concentrating on taking care of a specific person, other people who want to initiate interaction may feel frustrated with the robot. To a considerable extent this may be caused by the robot's behavior, which does not indicate any response to subsequent requests while preoccupied with the first. Therefore, we developed a robot that can project the order of service in a socially acceptable manner to each person who wishes to initiate interaction. In this paper we focus on the task of tea-serving, and introduce a robot able to bring tea to multiple users while accepting multiple requests. The robot can detect a person raising their hand to make a request, and move around people using its mobile functions while avoiding obstacles. When the robot detects a person's request while already serving tea to another person, it projects that it has received the order by indicating “you are next” through a nonverbal action, such as turning its gaze to the person. Because it can project the order of service and indicate its acknowledgement of their requests socially, people will likely feel more satisfied with the robot even when it cannot immediately address their needs. We confirmed the effectiveness of this capability through an experiment in which the robot distributed snacks to participants.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128634156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
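The request-handling behaviour described in this abstract can be illustrated with a short sketch. This is not the authors' implementation: the class and method names (TeaServiceQueue, on_hand_raised, serve_next) are hypothetical, and the nonverbal "you are next" cue is reduced to a placeholder call.

```python
from collections import deque

class TeaServiceQueue:
    """Hypothetical sketch of first-come-first-served tea serving with
    nonverbal acknowledgement of later requests."""

    def __init__(self):
        self._pending = deque()  # people waiting to be served, in request order

    def on_hand_raised(self, person_id):
        """Called when the hand-raising detector fires for a person."""
        if person_id not in self._pending:
            self._pending.append(person_id)
            # Acknowledge the new request even while busy serving someone else,
            # e.g. by briefly turning the robot's gaze toward the requester.
            self.project_acknowledgement(person_id, position=len(self._pending))

    def serve_next(self):
        """Serve the earliest outstanding request, if any."""
        if self._pending:
            person_id = self._pending.popleft()
            print(f"Serving tea to {person_id}")

    def project_acknowledgement(self, person_id, position):
        # Placeholder for the nonverbal "you are next" cue (gaze turn, nod, ...).
        print(f"Gaze toward {person_id}: you are number {position} in line")


if __name__ == "__main__":
    robot = TeaServiceQueue()
    robot.on_hand_raised("person_A")
    robot.on_hand_raised("person_B")  # acknowledged while A is being served
    robot.serve_next()
    robot.serve_next()
```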
Effects of responding to, initiating and ensuring joint attention in human-robot interaction
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005230
Chien-Ming Huang, A. Thomaz
{"title":"Effects of responding to, initiating and ensuring joint attention in human-robot interaction","authors":"Chien-Ming Huang, A. Thomaz","doi":"10.1109/ROMAN.2011.6005230","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005230","url":null,"abstract":"Inspired by the developmental timeline of joint attention in humans, we propose a conceptual model of joint attention with three parts: responding to joint attention, initiating joint attention, and ensuring joint attention.We conduct two experiments to investigate effects of joint attention in human-robot interaction. The first experiment explores the effects of responding to joint attention. We show that a robot responding to joint attention improves task performance and is perceived as more competent and socially interactive. The second experiment studies the importance of ensuring joint attention in human-robot interaction.We find that a robot's ensuring joint attention behavior is judged as having better performance in human-robot interactive tasks and is perceived as a natural behavior.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128883934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 60
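A minimal sketch of the three-part model named in the abstract (responding to, initiating, and ensuring joint attention). The decision rule and every name below are illustrative assumptions, not the authors' system.

```python
from enum import Enum, auto

class JointAttention(Enum):
    """The three components of the conceptual model described above."""
    RESPONDING = auto()  # follow the human's gaze or pointing to the referent
    INITIATING = auto()  # direct the human's attention to a target
    ENSURING = auto()    # check that the human actually attends, repair if not

def select_behavior(human_is_cueing: bool, robot_has_target: bool,
                    human_attending_to_target: bool) -> JointAttention:
    """Toy decision rule (not from the paper): pick which joint-attention
    behaviour to exercise given the current interaction state."""
    if human_is_cueing:
        return JointAttention.RESPONDING
    if robot_has_target and not human_attending_to_target:
        return JointAttention.INITIATING
    return JointAttention.ENSURING

print(select_behavior(human_is_cueing=False, robot_has_target=True,
                      human_attending_to_target=False))  # JointAttention.INITIATING
```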
Generating connection events for human-robot collaboration
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005245
A. Holroyd, C. Rich, C. Sidner, Brett Ponsleur
{"title":"Generating connection events for human-robot collaboration","authors":"A. Holroyd, C. Rich, C. Sidner, Brett Ponsleur","doi":"10.1109/ROMAN.2011.6005245","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005245","url":null,"abstract":"We have developed and tested a reusable Robot Operating System (ROS) module that supports engagement between a human and a humanoid robot by generating appropriate directed gaze, mutual facial gaze, adjacency pair and backchannel connection events. The module implements policies for adding gaze and pointing gestures to referring phrases (including deictic and anaphoric references), performing end-of-turn gazes, responding to human-initiated connection events and maintaining engagement. The module also provides an abstract interface for receiving information from a collaboration manager using the Behavior Markup Language (BML) and exchanges information with our previously developed engagement recognition module.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115752588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 49
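Since the module above is a ROS component, a hedged rospy sketch of how such connection events might be published is given below. The topic name, payload format, and event vocabulary are assumptions for illustration; the paper's actual message definitions and BML interface are not reproduced here.

```python
#!/usr/bin/env python
# Hypothetical rospy sketch: publishing engagement "connection events"
# (directed gaze, mutual facial gaze, adjacency pair, backchannel) as strings.
# Topic name and payload are assumptions, not the authors' interface.
import rospy
from std_msgs.msg import String

CONNECTION_EVENTS = [
    "directed_gaze",
    "mutual_facial_gaze",
    "adjacency_pair",
    "backchannel",
]

def main():
    rospy.init_node("connection_event_generator")
    pub = rospy.Publisher("/engagement/connection_events", String, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz, for demonstration only
    i = 0
    while not rospy.is_shutdown():
        event = CONNECTION_EVENTS[i % len(CONNECTION_EVENTS)]
        pub.publish(String(data=event))  # downstream modules subscribe to this topic
        i += 1
        rate.sleep()

if __name__ == "__main__":
    main()
```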
Context-aware Bayesian intention estimator using Self-Organizing Map and Petri net
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005232
Satoshi Suzuki, F. Harashima
{"title":"Context-aware Bayesian intention estimator using Self-Organizing Map and Petri net","authors":"Satoshi Suzuki, F. Harashima","doi":"10.1109/ROMAN.2011.6005232","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005232","url":null,"abstract":"For intelligent human-machine systems supporting user's operation, prediction of the user behavior and estimation of one's operational intention are required. However, the same high abilities as human being are required for such intelligent machines since human decides own action using advanced complex recognition ability. Therefore, the present authors proposed a Bayesian intention estimator using Self-Organizing Map (SOM). This estimator utilizes a mapping-relation obtained using SOM to find transition of the intentions. In this paper, an improvement of the Bayesian intention estimator is reported by considering the task context. The scenario of whole task is modeled by Petri net, and prediction of belief in Bayesian computation is modified by other probability estimated from the Petri-Net scenario. Applying the presented method to an estimation problem using a remote operation of the radio controlled construction equipments, improvements of the estimator were confirmed; an undetected intention modes were correctly detected, and inadequate identification was corrected with adequate timing.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125952600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
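The core idea, a discrete Bayesian belief update whose prediction step is reweighted by a task-context probability (standing in here for the Petri-net scenario), can be sketched as follows. The variable names, the simple three-mode model, and the numbers are illustrative assumptions, not the authors' estimator.

```python
def update_belief(belief, transition, likelihood, context_prob):
    """One discrete Bayesian filtering step over intention modes.

    belief[i]        -- prior probability of intention i
    transition[i][j] -- probability of moving from intention i to j
    likelihood[j]    -- P(observation | intention j), e.g. from an SOM mapping
    context_prob[j]  -- probability of intention j given the task context
                        (standing in for the Petri-net scenario)
    """
    n = len(belief)
    # Prediction step, reweighted by the context probability.
    predicted = []
    for j in range(n):
        p = sum(belief[i] * transition[i][j] for i in range(n))
        predicted.append(p * context_prob[j])
    # Correction step with the observation likelihood, then normalise.
    posterior = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(posterior) or 1.0
    return [p / z for p in posterior]


# Toy example with three intention modes.
belief = [0.5, 0.3, 0.2]
transition = [[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]]
likelihood = [0.2, 0.7, 0.1]      # observation favours mode 1
context_prob = [0.1, 0.6, 0.3]    # the task context also favours mode 1
print(update_belief(belief, transition, likelihood, context_prob))
```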
Relation between skill acquisition and task specific human speech in collaborative work
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005198
S. Nakata, Harumi Kobayashi, T. Yasuda, Masafumi Kumata, Satoshi Suzuki, H. Igarashi
{"title":"Relation between skill acquisition and task specific human speech in collaborative work","authors":"S. Nakata, Harumi Kobayashi, T. Yasuda, Masafumi Kumata, Satoshi Suzuki, H. Igarashi","doi":"10.1109/ROMAN.2011.6005198","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005198","url":null,"abstract":"To accomplish the objective to make human-collaborative robots, we need to clarify how humans actually interact each other when they do collaborative work. In this study, we transcribed all utterances produced while participants' completing human-human collaborative conveyer task, and computed and categorized all morphemes (minimal unit of language meaning) using 4 categories based on the morpheme's role in the task. The role categories were Robot Action, User Action, Modifier, Object. We analyzed the utterances produced by 4 groups, 3 participants in each group. Results were that frequency of each category per minute decreased over ten trials. However, the variety of words in each category tended to show an inverted U-shaped pattern. Based on these results, we proposed three stages of language skill acquisition in a collaborative work.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132630811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
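A minimal sketch of the kind of tallying described above: token frequency per minute and word variety per category. The four categories come from the abstract; the toy transcript and data layout are invented for illustration.

```python
from collections import Counter, defaultdict

# Role categories named in the abstract.
CATEGORIES = ("Robot Action", "User Action", "Modifier", "Object")

# Invented toy data: (minute, category, morpheme) triples from a transcript.
transcript = [
    (1, "Robot Action", "lift"),
    (1, "Object", "box"),
    (1, "Modifier", "slowly"),
    (2, "User Action", "push"),
    (2, "Object", "box"),
    (2, "Robot Action", "stop"),
]

freq_per_minute = Counter()   # (minute, category) -> token count
variety = defaultdict(set)    # category -> distinct morphemes seen

for minute, category, morpheme in transcript:
    freq_per_minute[(minute, category)] += 1
    variety[category].add(morpheme)

for (minute, category), count in sorted(freq_per_minute.items()):
    print(f"minute {minute}, {category}: {count} tokens")
for category in CATEGORIES:
    print(f"{category}: {len(variety[category])} distinct words")
```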
An augmented reality system for teaching sequential tasks to a household robot
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005235
Richard Fung, S. Hashimoto, M. Inami, T. Igarashi
{"title":"An augmented reality system for teaching sequential tasks to a household robot","authors":"Richard Fung, S. Hashimoto, M. Inami, T. Igarashi","doi":"10.1109/ROMAN.2011.6005235","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005235","url":null,"abstract":"We present a method of instructing a sequential task to a household robot using a hand-held augmented reality device. The user decomposes a high-level goal such as “prepare a drink” into steps such as delivering a mug under a kettle and pouring hot water into the mug. The user takes a photograph of each step using the device and annotates it with necessary information via touch operation. The resulting sequence of annotated photographs serves as a reference for review and reuse at a later time. We created a working prototype system with various types of robots and appliances.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132651345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
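The sequence of annotated photographs described above suggests a simple data representation, sketched below under assumed names (TaskStep, TaskSequence); the paper's actual storage format is not specified in the abstract.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskStep:
    """One photographed, annotated step of a household task (hypothetical schema)."""
    photo_path: str     # photograph taken with the hand-held AR device
    annotation: str     # information added via touch operation
    target_device: str  # appliance or robot the step refers to

@dataclass
class TaskSequence:
    """An ordered, reusable sequence of steps for one high-level goal."""
    goal: str
    steps: List[TaskStep] = field(default_factory=list)

    def add_step(self, step: TaskStep) -> None:
        self.steps.append(step)

# Example mirroring the "prepare a drink" scenario in the abstract.
prepare_drink = TaskSequence(goal="prepare a drink")
prepare_drink.add_step(TaskStep("step1.jpg", "deliver mug under kettle", "delivery robot"))
prepare_drink.add_step(TaskStep("step2.jpg", "pour hot water into mug", "kettle"))
print(len(prepare_drink.steps), "steps stored for goal:", prepare_drink.goal)
```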
Investigating the effects of visual saliency on deictic gesture production by a humanoid robot
2011 RO-MAN Pub Date: 2011-07-01 DOI: 10.1109/ROMAN.2011.6005266
A. Clair, Ross Mead, M. Matarić
{"title":"Investigating the effects of visual saliency on deictic gesture production by a humanoid robot","authors":"A. Clair, Ross Mead, M. Matarić","doi":"10.1109/ROMAN.2011.6005266","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005266","url":null,"abstract":"In many collocated human-robot interaction scenarios, robots are required to accurately and unambiguously indicate an object or point of interest in the environment. Realistic, cluttered environments containing many visually salient targets can present a challenge for the observer of such pointing behavior. In this paper, we describe an experiment and results detailing the effects of visual saliency and pointing modality on human perceptual accuracy of a robot's deictic gestures (head and arm pointing) and compare the results to the perception of human pointing.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131453724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 27
The function of off-gaze in human-robot interaction
2011 RO-MAN Pub Date: 2011-07-01 DOI: 10.1109/ROMAN.2011.6005271
Sascha Hinte, M. Lohse
{"title":"The function of off-gaze in human-robot interaction","authors":"Sascha Hinte, M. Lohse","doi":"10.1109/ROMAN.2011.6005271","DOIUrl":"https://doi.org/10.1109/ROMAN.2011.6005271","url":null,"abstract":"When and how do users interrupt the interaction with a robot and turn to the experimenter? Usually it is assumed that experimenters affect the interaction negatively and should ideally not be present at all. However, in interaction situations with autonomous systems and inexperienced users this is often not possible for safety reasons. Thus, the participants indeed at times switch their focus of attention from the robot to the experimenter. Instead of seeing this as something purely negative, we argue that answering the questions of when, why and how this happens actually bears important information about the state of the interaction and the users' understanding of it. Therefore, we analyzed a study conducted in a home tour scenario with this respect and indeed discovered certain situations when the users turned away from the robot and towards the experimenter.","PeriodicalId":408015,"journal":{"name":"2011 RO-MAN","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124770611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3