Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication: Latest Publications

Robotic receptionist ASKA: a research platform for human-robot interaction
J. Ido, K. Takemura, Y. Matsumoto, T. Ogasawara
DOI: 10.1109/ROMAN.2002.1045640 (https://doi.org/10.1109/ROMAN.2002.1045640)
Published: 2002-12-10
Abstract: Intelligent robots offer us a chance to use computers in our daily lives. We installed a humanoid robot, ASKA, at our university reception desk to provide computerized university guidance. This paper describes the hardware and software system of ASKA. ASKA can detect a user in front of the reception desk using a stereo camera system attached to its head and recognize the user's spoken question. It answers the question with its synthesized voice, accompanied by hand gestures and head movements.
Citations: 9
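The abstract describes a detect-recognize-respond pipeline. As a rough illustration of the guidance-answering step only, here is a toy Python sketch; the knowledge base, gesture names, and keyword matching are invented for illustration and are not ASKA's actual software.

GUIDANCE = {
    "library": ("The library is in building B, on the second floor.", "point_left"),
    "cafeteria": ("The cafeteria is across the courtyard.", "point_right"),
}

def respond(question: str) -> tuple:
    """Pick a canned answer and an accompanying gesture for a visitor's question."""
    for keyword, (reply, gesture) in GUIDANCE.items():
        if keyword in question.lower():
            return reply, gesture
    return "I'm sorry, could you rephrase your question?", "tilt_head"

# Example: the speech recognizer has already produced the question text.
print(respond("Where is the library?"))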
Conversational agent who achieves tasks while interacting with humans based on scenarios
S. Takata, S. Kawato, K. Mase
DOI: 10.1109/ROMAN.2002.1045628 (https://doi.org/10.1109/ROMAN.2002.1045628)
Published: 2002-12-10
Abstract: Many interaction-oriented robots have been proposed as participants in human society. These agents, however, are not committed to achieving tasks. To become our partners, such agents should not only interact with humans but also achieve tasks with them. In this paper, we propose a conversational agent that achieves tasks while interacting with humans based on scenarios. To realize this type of agent, we develop an enhanced BDI (belief-desire-intention) architecture by integrating research results from interaction-oriented and task-oriented agents. We show the effectiveness of the proposed approach by applying it to a conversational agent, called Photo-agent, that takes photographs while interacting with users.
Citations: 4
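The abstract refers to an enhanced BDI architecture without detailing it. For orientation only, below is a minimal sketch of a plain scenario-driven BDI-style loop (beliefs, desires, intentions); the class layout and the toy photo scenario are assumptions for illustration, not the authors' architecture.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)   # what the agent currently holds true
    desires: list = field(default_factory=list)   # scenario goals, in order
    intention: Optional[str] = None               # goal currently being pursued

    def perceive(self, observation: dict) -> None:
        """Fold new observations (e.g. user utterances) into the beliefs."""
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        """Commit to the next scenario goal that is not yet achieved."""
        for goal in self.desires:
            if not self.beliefs.get(goal, False):
                self.intention = goal
                return
        self.intention = None

    def act(self) -> str:
        """Execute one step toward the current intention."""
        if self.intention is None:
            return "done"
        # A real agent would execute a plan here; the sketch just marks the goal done.
        self.beliefs[self.intention] = True
        return "working on: " + self.intention

# Toy "photo" scenario: greet the user, frame the shot, take the picture.
agent = Agent(desires=["greeted_user", "framed_subject", "photo_taken"])
agent.perceive({"user_present": True})
while True:
    agent.deliberate()
    status = agent.act()
    print(status)
    if status == "done":
        break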
Children's cognitive abilities in construction and programming robots
B. Caci, A. D'Amico
DOI: 10.1109/ROMAN.2002.1045620 (https://doi.org/10.1109/ROMAN.2002.1045620)
Published: 2002-12-10
Abstract: The aim of this pilot study, which involved a group of eleven-year-old Italian subjects, is to examine the relationship between non-verbal intelligence, visual-constructive ability, logical reasoning, and ability in constructing and behaviorally programming LEGO Mindstorm robots. As expected, the results showed that children with high scores on the cognitive tests are more able than others both in robot body construction and in robot behavioral programming. Implications for future research are emphasized.
Citations: 10
Generalized facial expression of character face based on deformation model for human-robot communication
T. Fukuda, M. Nakashima, F. Arai, Y. Hasegawa
DOI: 10.1109/ROMAN.2002.1045644 (https://doi.org/10.1109/ROMAN.2002.1045644)
Published: 2002-12-10
Abstract: Communication between a computer system and a user is quite important. For communication, a robot's facial expression is effective for showing the robot's internal mental state. If a system has its own face with appropriate facial expressions, it can also be used as a communication interpreter. In this paper, we focus on facial expression for computer systems. We have developed an artificial face, in both software and hardware, as a communication tool. We consider the rate at which users recognize the artificial face's expressions to be important for real communication. We therefore used a "Character Face (CF)" as an information terminal and expressed some basic facial expressions using a deformation technique. We propose a normalization technique that makes it easy to apply a facial expression to various different faces. We developed the facial expression system with 3D computer graphics (CG) and evaluated the recognition rate of its expressions. The recognition rate was improved by the deformation-based approach, and the method can be applied to various different faces through the normalization technique.
Citations: 5
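The abstract mentions a deformation technique plus a normalization step so that one expression can be transferred to different faces. A minimal sketch of that general idea, assuming a simple landmark representation and size-based normalization (neither is specified in the abstract):

import numpy as np

def expression_offsets(neutral, expressive, face_scale):
    """Per-landmark displacement of an expression, normalized by the source face size."""
    return (np.asarray(expressive) - np.asarray(neutral)) / face_scale

def apply_expression(target_neutral, offsets, target_scale, intensity=1.0):
    """Deform a different neutral face by the re-scaled displacements."""
    return np.asarray(target_neutral) + intensity * offsets * target_scale

# Toy 2D landmarks (e.g. two mouth corners and the nose tip).
source_neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
source_smile   = np.array([[0.0, 0.1], [1.0, 0.1], [0.5, 1.0]])
offsets = expression_offsets(source_neutral, source_smile, face_scale=1.0)

target_neutral = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])   # a larger face
print(apply_expression(target_neutral, offsets, target_scale=2.0))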
Sound focusing technology using parametric effect with beat signal
L. Ishimaru, R. Hyoudou
DOI: 10.1109/ROMAN.2002.1045635 (https://doi.org/10.1109/ROMAN.2002.1045635)
Published: 2002-12-10
Abstract: The purpose of this research is to focus sound onto an arbitrary point so that only a specific person can hear the information. We propose a new sound focusing method that uses a parametric effect produced by two kinds of ultrasonic speakers. One speaker transmits ultrasound as a carrier; the other transmits ultrasound that carries audible sound via balanced modulation. In the area where the two signals intersect, the ultrasound is modulated, as in amplitude modulation, by a beat signal, and the parametric effect demodulates audible sound in the air. Thus, only a person standing near the intersection point can hear the sound. In this paper, we discuss the principle of the proposed method and present experimental results.
Citations: 2
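The principle in the abstract (a plain ultrasonic carrier plus a balanced-modulated ultrasonic signal, with the air's nonlinear parametric effect demodulating the audible difference component where the beams cross) can be illustrated numerically. The sketch below uses assumed frequencies and a simple squaring nonlinearity as a stand-in for the parametric effect; it is not the authors' experimental setup.

import numpy as np

fs = 192_000                        # sample rate high enough for ultrasound
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal

f_carrier = 40_000                  # plain ultrasonic carrier (speaker 1)
f_audio = 1_000                     # audible content to deliver
carrier = np.sin(2 * np.pi * f_carrier * t)
# Speaker 2: balanced (double-sideband, suppressed-carrier) modulation of the
# audio onto the same ultrasonic frequency.
modulated = np.sin(2 * np.pi * f_audio * t) * np.sin(2 * np.pi * f_carrier * t)

# Where the beams intersect, the medium's nonlinearity mixes the two signals;
# a quadratic term is the simplest stand-in for the parametric effect.  Its
# spectrum contains a component at the difference frequency, i.e. the audio.
mixed = (carrier + modulated) ** 2
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

audible_band = (freqs > 100) & (freqs < 5_000)
peak = freqs[audible_band][np.argmax(spectrum[audible_band])]
print(f"strongest audible-band component: {peak:.0f} Hz")   # ~1000 Hz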
Person tracking with a mobile robot based on multi-modal anchoring
Marcus Kleinehagenbrock, S. Lang, J. Fritsch, Frank Lömker, Gernot A. Fink, G. Sagerer
DOI: 10.1109/ROMAN.2002.1045659 (https://doi.org/10.1109/ROMAN.2002.1045659)
Published: 2002-12-10
Abstract: The ability to robustly track a person is an important prerequisite for human-robot interaction. This paper presents a hybrid approach that integrates vision and laser range data to track a human. A person's legs can be extracted from laser range data, while skin-colored faces are detectable in camera images showing the upper body. As these algorithms provide different percepts originating from the same person, the perceptual results have to be combined. We link the percepts to their symbolic counterparts, legs and face, by anchoring processes as defined by Coradeschi and Saffiotti. To anchor the composite symbol person, we extend the anchoring framework with a fusion module that integrates the individual anchors. This allows us to deal with perceptual algorithms that have different spatio-temporal properties and provides a structured way to integrate anchors from multiple modalities. An example with a mobile robot tracking a person demonstrates the performance of our approach.
Citations: 107
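The paper's key step is fusing modality-specific anchors (legs from laser, face from vision) into a composite person anchor. A minimal sketch of that fusion idea, with an assumed distance threshold and simple averaging standing in for the paper's actual fusion module:

import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Anchor:
    symbol: str        # e.g. "legs", "face" or "person"
    x: float           # estimated position in robot coordinates (m)
    y: float
    timestamp: float   # time of the last supporting percept (s)

def fuse_person(legs: Optional[Anchor], face: Optional[Anchor],
                max_separation: float = 0.5, max_age: float = 1.0,
                now: float = 0.0) -> Optional[Anchor]:
    """Combine leg and face anchors into one person anchor if they agree."""
    recent = [a for a in (legs, face)
              if a is not None and now - a.timestamp <= max_age]
    if not recent:
        return None                                  # nothing to anchor the person to
    if len(recent) == 2 and math.hypot(recent[0].x - recent[1].x,
                                       recent[0].y - recent[1].y) > max_separation:
        # The percepts disagree spatially: keep only the more recent one.
        recent = [max(recent, key=lambda a: a.timestamp)]
    x = sum(a.x for a in recent) / len(recent)
    y = sum(a.y for a in recent) / len(recent)
    return Anchor("person", x, y, max(a.timestamp for a in recent))

# Laser sees legs at (1.0, 0.2); the camera sees a face at (1.1, 0.25).
print(fuse_person(Anchor("legs", 1.0, 0.2, 0.0), Anchor("face", 1.1, 0.25, 0.0)))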
Dependable multimodal communication and interaction with robotic assistants
R. Bischoff, V. Graefe
DOI: 10.1109/ROMAN.2002.1045639 (https://doi.org/10.1109/ROMAN.2002.1045639)
Published: 2002-12-10
Abstract: To advance research in the field of multimodal human-robot communication, we designed and built the humanoid robot Hermes. Equipped with an omnidirectional undercarriage and two manipulator arms, it combines visual, kinesthetic, tactile, and auditory sensing with natural spoken language input and output and body expressions for natural communication and interaction with humans. Hermes was successfully tested in an extended six-month experiment in a museum where only naive users interacted with the robot. They chatted with Hermes in several languages and requested various services. Multimodal communication and the robot's understanding of the current situation turned out to be the keys to success.
Citations: 58
Design spaces and niche spaces of believable social robots
K. Dautenhahn
DOI: 10.1109/ROMAN.2002.1045621 (https://doi.org/10.1109/ROMAN.2002.1045621)
Published: 2002-12-10
Abstract: This paper discusses the design space of believable social robots. We synthesize ideas and concepts from areas as diverse as comics design and rehabilitation robotics. First, we revisit the work of the Japanese researcher Masahiro Mori in the context of recent developments in social robots. Next, we discuss work in the arts on comics design, an area that has dealt for decades with the problem of creating believable characters. Finally, to illustrate some of the important issues involved, we focus on a particular application area: the use of interactive robots in autism therapy, work carried out in the Aurora project. We discuss design issues of social robots in the context of 'design spaces' and 'niche spaces', concepts originally defined for intelligent agent architectures but which, we propose, can be highly valuable for social robotics design. This paper is meant to open up a discussion toward a systematic exploration of the design spaces and niche spaces of social robots.
Citations: 126
Simultaneous measurement of the grip/load force and the finger EMG: Effects of the grasping condition
Y. Kurita, M. Tada, Y. Matsumoto, T. Ogasawara
DOI: 10.1109/ROMAN.2002.1045625 (https://doi.org/10.1109/ROMAN.2002.1045625)
Published: 2002-12-10
Abstract: Humans have efficient grasping strategies, such as anticipatory force programming, that use somatic sensation of an object's various properties. However, it is not clear how the object's weight and the grasping condition affect motor outputs during the grasping motion. In this paper, the grip/load force and the surface electromyography (EMG) of the abductor pollicis brevis (AbPB) and adductor pollicis (AdP) muscles are measured simultaneously to investigate the relation between the object's weight and the motor outputs. The experimental results show that changes in weight affect the initial peak grip force, whereas the grasping condition affects the motor outputs in the pre-contact phase as measured by the AbPB EMG. These results suggest that anticipatory force programming depends largely not only on the object's properties but also on the grasping condition.
Citations: 6
Detecting the gaze direction for a man machine interface
C. Theis, K. Hustadt
DOI: 10.1109/ROMAN.2002.1045677 (https://doi.org/10.1109/ROMAN.2002.1045677)
Published: 2002-12-10
Abstract: In this paper we describe a technique to determine the gaze direction of a human operator. Beginning with head extraction by color segmentation, the eye positions are estimated by a corner detector, taking into account that the eyes must lie at approximately the same level. The iris position is found in two steps. First, a region-growing algorithm marks the darkest area of each eye, and template matching determines the precise position within this pre-marked region. Second, a Hough transform builds a second hypothesis about the iris position based on the results of a Canny edge detector. By fusing these two results, wrong determinations are excluded. Furthermore, the eye corners are located to evaluate their positions relative to the iris. This is done with a parametric eye model by finding the best correspondence between the model and the image features. With the iris and eye corner positions, the gaze direction can be determined. The head direction is assumed to be known.
Citations: 7
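The two-hypothesis iris localization described in the abstract (template matching in a dark region plus a Hough transform on Canny edges, fused to reject wrong detections) can be sketched with standard OpenCV calls. The template, thresholds, and agreement radius below are assumptions, not the authors' parameters.

import cv2
import numpy as np
from typing import Optional, Tuple

def iris_by_template(eye_gray: np.ndarray, template: np.ndarray) -> Tuple[int, int]:
    """Hypothesis 1: best match of a dark-iris template inside the eye region."""
    scores = cv2.matchTemplate(eye_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    h, w = template.shape
    return max_loc[0] + w // 2, max_loc[1] + h // 2   # center of the best match

def iris_by_hough(eye_gray: np.ndarray) -> Optional[Tuple[int, int]]:
    """Hypothesis 2: strongest circle found by the Hough transform.
    HoughCircles runs a Canny edge detector internally (param1 is its upper
    threshold), which corresponds to the abstract's Canny + Hough step."""
    circles = cv2.HoughCircles(eye_gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=10, minRadius=3, maxRadius=15)
    if circles is None:
        return None
    x, y, _ = circles[0][0]
    return int(x), int(y)

def fuse(p1: Optional[Tuple[int, int]], p2: Optional[Tuple[int, int]],
         max_dist: float = 5.0) -> Optional[Tuple[int, int]]:
    """Accept the iris position only when both hypotheses roughly agree."""
    if p1 is None or p2 is None:
        return None
    if np.hypot(p1[0] - p2[0], p1[1] - p2[1]) > max_dist:
        return None                       # exclude a likely wrong determination
    return (p1[0] + p2[0]) // 2, (p1[1] + p2[1]) // 2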