Latest Publications from 2011 RO-MAN

Towards a typology of meaningful signals and cues in social robotics
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005246
F. Hegel, Sebastian Gieselmann, Annika Peters, Patrick Holthaus, B. Wrede
Abstract: In this paper, we present a first step towards a typology of relevant signals and cues in human-robot interaction (HRI). In human as well as animal communication systems, signals and cues play an important role for senders and receivers of such signs. In our typology, we systematically distinguish between a robot's signals and cues, which are designed to be either human-like or artificial, in order to create meaningful information. Developers and designers should therefore be aware of which signs affect a user's judgement of social robots. For this reason, we first review several signals and cues that have already been successfully used in HRI with regard to our typology. Second, we discuss crucial human-like and artificial cues which have so far not been considered in the design of social robots, although they are highly likely to affect a user's judgement of social robots.
Citations: 45
A considerate care robot able to serve in multi-party settings
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005286
Yoshinori Kobayashi, Masahiko Gyoda, T. Tabata, Y. Kuno, K. Yamazaki, Momoyo Shibuya, Yukiko Seki, Akiko Yamazaki
Abstract: This paper introduces a service robot that provides assisted care, such as serving tea to the elderly in care facilities. In multi-party settings, a robot is required to be able to deal with requests from multiple individuals simultaneously. In particular, when the service robot is concentrating on taking care of a specific person, other people who want to initiate interaction may feel frustrated with the robot. To a considerable extent this may be caused by the robot's behavior, which does not indicate any response to subsequent requests while preoccupied with the first. Therefore, we developed a robot that can project the order of service in a socially acceptable manner to each person who wishes to initiate interaction. In this paper we focus on the task of tea-serving, and introduce a robot able to bring tea to multiple users while accepting multiple requests. The robot can detect a person raising their hand to make a request, and move around people using its mobile functions while avoiding obstacles. When the robot detects a person's request while already serving tea to another person, it projects that it has received the order by indicating "you are next" through a nonverbal action, such as turning its gaze to the person. Because it can project the order of service and indicate its acknowledgement of their requests socially, people will likely feel more satisfied with the robot even when it cannot immediately address their needs. We confirmed the effectiveness of this capability through an experiment in which the robot distributed snacks to participants.
Citations: 7
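
The abstract above gives no implementation details, but the multi-party behavior it describes is essentially a first-come, first-served request queue with a nonverbal acknowledgement for requests that arrive while the robot is busy. A minimal sketch of that logic, with hypothetical robot primitives (serve_tea, gaze_at), might look like this:

```python
from collections import deque

class CareRobotScheduler:
    """Minimal sketch of multi-party service scheduling (hypothetical API).

    Requests are served first-come first-served; a request that arrives
    while the robot is busy is acknowledged nonverbally ("you are next")
    instead of being silently ignored.
    """

    def __init__(self, robot):
        self.robot = robot      # assumed to expose gaze/serving primitives
        self.queue = deque()    # pending service requests
        self.current = None     # person currently being served

    def on_hand_raised(self, person):
        """Called by a perception module when a raised hand is detected."""
        if self.current is None:
            self.current = person
            self.robot.serve_tea(person)
        else:
            # Busy: acknowledge the new request with a brief gaze shift
            # so the person knows they are next in line.
            self.queue.append(person)
            self.robot.gaze_at(person, duration_s=1.5)

    def on_service_done(self):
        """Called when the current serving action finishes."""
        self.current = self.queue.popleft() if self.queue else None
        if self.current is not None:
            self.robot.serve_tea(self.current)
```

The key design point reported in the abstract is the acknowledgement branch: a pending request is answered immediately with gaze rather than queued without any response.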
Target person identification and following based on omnidirectional camera and LRF data fusion
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005248
Mehrez Kristou, A. Ohya, S. Yuta
Abstract: In this paper, we present the current progress of our approach to identifying and following a target person for a service robot application. The robot is equipped with an LRF and an omnidirectional camera. Our approach is based on multi-sensor fusion, in which a person is identified using the panoramic image and tracked using the LRF. Target person selection is implemented to improve identification when multiple candidates are detected. Our approach is successfully implemented on a mobile robot. A simplified target person following behavior is implemented to focus on the proposed method's efficiency. Several experiments were conducted and showed the effectiveness of our approach for identifying and following a person in indoor environments.
Citations: 17
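
The paper itself does not publish code; as a rough illustration of the kind of bearing-based association between panoramic-image detections and LRF clusters that the abstract describes, the following sketch (all function names, data layouts, and the tolerance value are assumptions) picks the LRF candidate whose co-located image detection best matches the target's appearance signature:

```python
import math

def fuse_and_track(lrf_clusters, image_detections, target_signature,
                   bearing_tolerance_rad=0.15):
    """Associate LRF clusters with camera detections by bearing and pick the target.

    lrf_clusters: list of (range_m, bearing_rad) person candidates from LRF clustering.
    image_detections: list of (bearing_rad, signature) from the panoramic image.
    target_signature: appearance descriptor of the person to follow.
    Returns the (range_m, bearing_rad) of the target, or None if not found.
    """
    best, best_score = None, float("inf")
    for r, b in lrf_clusters:
        for det_bearing, signature in image_detections:
            # Omnidirectional camera and LRF are assumed to share the robot
            # frame, so a cluster and a detection at similar bearings are
            # treated as the same person.
            if abs(b - det_bearing) > bearing_tolerance_rad:
                continue
            score = appearance_distance(signature, target_signature)
            if score < best_score:
                best, best_score = (r, b), score
    return best

def appearance_distance(sig_a, sig_b):
    """Placeholder appearance metric, e.g. distance between color histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))
```

The returned range/bearing pair would then feed a simple following controller; the selection step mirrors the paper's goal of resolving the target when multiple candidates are detected.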
Extraction of tacit knowledge as expert engineer's skill based on mixed human sensing
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005234
H. Hashimoto, Ikuyo Yoshida, Y. Teramoto, Hideki Tabata, Chao Han
Abstract: We present a mixed human sensing method to extract the tacit knowledge behind skillful operation from expert engineers in the manufacturing industry. The method, which consists of field-oriented interviews, human motion capture, and video analysis, is applied to both experts and beginners, and the operational differences between them are analyzed. By returning the results of the analysis to the subjects and confirming them, tacit knowledge is effectively extracted. Through examinations applying the method to subjects operating in an industrial factory, the effectiveness of the method is shown.
Citations: 9
Relation between skill acquisition and task specific human speech in collaborative work
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005198
S. Nakata, Harumi Kobayashi, T. Yasuda, Masafumi Kumata, Satoshi Suzuki, H. Igarashi
Abstract: To accomplish the objective of building human-collaborative robots, we need to clarify how humans actually interact with each other when they do collaborative work. In this study, we transcribed all utterances produced while participants completed a human-human collaborative conveyor task, and computed and categorized all morphemes (the minimal units of linguistic meaning) into four categories based on each morpheme's role in the task: Robot Action, User Action, Modifier, and Object. We analyzed the utterances produced by four groups of three participants each. The results showed that the per-minute frequency of each category decreased over ten trials. However, the variety of words in each category tended to show an inverted U-shaped pattern. Based on these results, we propose three stages of language skill acquisition in collaborative work.
Citations: 2
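
The analysis described above reduces to counting morpheme tokens per role category (normalized per minute) and counting distinct word types per category. A small illustrative sketch, assuming a precomputed morpheme-to-category lookup (the placeholder entries below are not from the study):

```python
from collections import Counter

# Hypothetical morpheme -> role-category lookup; the study's categories were
# Robot Action, User Action, Modifier, and Object.
CATEGORY_OF = {
    "word_a": "Robot Action",
    "word_b": "User Action",
    "word_c": "Modifier",
    "word_d": "Object",
}

def category_frequencies(morphemes, trial_minutes):
    """Tokens per category, normalized to tokens per minute for one trial."""
    counts = Counter(CATEGORY_OF.get(m, "Other") for m in morphemes)
    return {category: n / trial_minutes for category, n in counts.items()}

def category_variety(morphemes):
    """Number of distinct word types observed in each category."""
    types = {}
    for m in set(morphemes):
        types.setdefault(CATEGORY_OF.get(m, "Other"), set()).add(m)
    return {category: len(words) for category, words in types.items()}
```

Tracking these two quantities across the ten trials is what would reveal the decreasing frequency and the inverted U-shaped variety curve reported in the abstract.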
An augmented reality system for teaching sequential tasks to a household robot
2011 RO-MAN Pub Date: 2011-08-30 DOI: 10.1109/ROMAN.2011.6005235
Richard Fung, S. Hashimoto, M. Inami, T. Igarashi
Abstract: We present a method of instructing a sequential task to a household robot using a hand-held augmented reality device. The user decomposes a high-level goal such as "prepare a drink" into steps such as delivering a mug under a kettle and pouring hot water into the mug. The user takes a photograph of each step using the device and annotates it with necessary information via touch operation. The resulting sequence of annotated photographs serves as a reference for review and reuse at a later time. We created a working prototype system with various types of robots and appliances.
Citations: 15
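
The stored artifact described in the abstract is an ordered sequence of annotated photographs. One way such a structure might be represented (purely illustrative; field names and the executor interface are assumptions, not the authors' implementation):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AnnotatedStep:
    """One step of a sequential task: a photograph plus touch annotations."""
    photo_path: str                              # snapshot taken on the hand-held device
    annotations: List[Tuple[float, float, str]]  # (x, y, label) from touch input
    instruction: str                             # e.g. "place the mug under the kettle"

@dataclass
class TaskSequence:
    """An ordered list of annotated steps that can be reviewed or replayed."""
    goal: str                                    # high-level goal, e.g. "prepare a drink"
    steps: List[AnnotatedStep] = field(default_factory=list)

    def add_step(self, step: AnnotatedStep) -> None:
        self.steps.append(step)

    def replay(self, executor) -> None:
        """Send each step to a robot/appliance executor in order (hypothetical)."""
        for step in self.steps:
            executor.execute(step)
```

Keeping the photograph, annotations, and instruction together per step is what makes the sequence reusable for later review, as the abstract emphasizes.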
Investigating the effects of visual saliency on deictic gesture production by a humanoid robot
2011 RO-MAN Pub Date: 2011-07-01 DOI: 10.1109/ROMAN.2011.6005266
A. Clair, Ross Mead, M. Matarić
Abstract: In many collocated human-robot interaction scenarios, robots are required to accurately and unambiguously indicate an object or point of interest in the environment. Realistic, cluttered environments containing many visually salient targets can present a challenge for the observer of such pointing behavior. In this paper, we describe an experiment and results detailing the effects of visual saliency and pointing modality on human perceptual accuracy of a robot's deictic gestures (head and arm pointing) and compare the results to the perception of human pointing.
Citations: 27
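
For context on what "indicating an object" means geometrically in such experiments: a common way to resolve the target of a head or arm pointing ray is to pick the object with the smallest angular deviation from that ray. The sketch below is a generic illustration under that assumption, not the authors' experimental apparatus:

```python
import numpy as np

def indicated_object(origin, direction, object_positions):
    """Return the object a pointing ray most plausibly indicates (illustrative).

    origin: 3D position of the pointing joint (e.g. shoulder or head).
    direction: vector of the pointing ray (e.g. shoulder-to-wrist).
    object_positions: dict mapping object name -> 3D position.
    Returns (name, angular_error_rad) of the best-aligned object.
    """
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    best_name, best_angle = None, np.inf
    for name, pos in object_positions.items():
        to_obj = np.asarray(pos, dtype=float) - np.asarray(origin, dtype=float)
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.arccos(np.clip(np.dot(direction, to_obj), -1.0, 1.0))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name, best_angle
```

In cluttered scenes with many salient candidates, several objects can fall within a small angular error of the ray, which is exactly the ambiguity the study examines from the human observer's side.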
The function of off-gaze in human-robot interaction
2011 RO-MAN Pub Date: 2011-07-01 DOI: 10.1109/ROMAN.2011.6005271
Sascha Hinte, M. Lohse
Abstract: When and how do users interrupt the interaction with a robot and turn to the experimenter? Usually it is assumed that experimenters affect the interaction negatively and should ideally not be present at all. However, in interaction situations with autonomous systems and inexperienced users this is often not possible for safety reasons. Thus, the participants indeed at times switch their focus of attention from the robot to the experimenter. Instead of seeing this as something purely negative, we argue that answering the questions of when, why, and how this happens actually bears important information about the state of the interaction and the users' understanding of it. Therefore, we analyzed a study conducted in a home tour scenario in this respect and indeed discovered certain situations in which the users turned away from the robot and towards the experimenter.
Citations: 3