Layered modular action control for communicative humanoids

K. Thórisson
{"title":"Layered modular action control for communicative humanoids","authors":"K. Thórisson","doi":"10.1109/CA.1997.601055","DOIUrl":null,"url":null,"abstract":"Face-to-face interaction between people is generally effortless and effective. We exchange glances, take turns speaking and make facial and manual gestures to achieve the goals of the dialogue. This paper describes an action composition and selection architecture for synthetic characters capable of full-duplex, real-time face-to-face interaction with a human. This architecture is part of a computational model of psychosocial dialogue skills, called Y_m_i_r_, that bridges between multimodal perception and multimodal action generation. To test the architecture, a prototype humanoid has been implemented, named G_a_n_d_a_ l_f_, who commands a graphical model of the solar system and can engage in task-directed dialogue with people using speech, manual and facial gesture. Gandalf has been tested in interaction with users and has been shown capable of fluid turn-taking and multimodal dialogue. The primary focus in this paper will be on the action selection mechanisms and low-level composition of motor commands. An overview is also given of the Ymir model and Gandalf's graphical representation.","PeriodicalId":155755,"journal":{"name":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1997-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"56","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Computer Animation '97 (Cat. No.97TB100120)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CA.1997.601055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 56

Abstract

Face-to-face interaction between people is generally effortless and effective. We exchange glances, take turns speaking, and make facial and manual gestures to achieve the goals of the dialogue. This paper describes an action composition and selection architecture for synthetic characters capable of full-duplex, real-time face-to-face interaction with a human. This architecture is part of a computational model of psychosocial dialogue skills, called Ymir, that bridges between multimodal perception and multimodal action generation. To test the architecture, a prototype humanoid has been implemented, named Gandalf, who commands a graphical model of the solar system and can engage in task-directed dialogue with people using speech, manual, and facial gesture. Gandalf has been tested in interaction with users and has been shown capable of fluid turn-taking and multimodal dialogue. The primary focus in this paper will be on the action selection mechanisms and low-level composition of motor commands. An overview is also given of the Ymir model and Gandalf's graphical representation.
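The abstract's core idea, multiple decision layers whose requests are composed into low-level motor commands, can be illustrated with a small sketch. The sketch below is not from the paper: the class names (`Layer`, `ActionScheduler`, `MotorCommand`), the specific layers, and the priority scheme are illustrative assumptions about how a layered action-control loop of this general kind might be organized, not Ymir's actual design.

```python
# Illustrative sketch only: layer names, priorities, and the scheduler's
# conflict-resolution rule are assumptions, not the paper's actual API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Percept:
    modality: str   # e.g. "gaze", "speech", "gesture"
    value: str

@dataclass
class MotorCommand:
    effector: str   # e.g. "eyes", "head", "voice"
    action: str
    priority: int

class Layer:
    """One decision layer: maps incoming percepts to requested motor commands."""
    def __init__(self, name: str, priority: int,
                 rules: dict[str, Callable[[Percept], Optional[MotorCommand]]]):
        self.name, self.priority, self.rules = name, priority, rules

    def decide(self, percept: Percept) -> Optional[MotorCommand]:
        rule = self.rules.get(percept.modality)
        return rule(percept) if rule else None

class ActionScheduler:
    """Composes requests from all layers into one command per effector,
    letting higher-priority (faster, more reactive) layers win conflicts."""
    def compose(self, requests: list[MotorCommand]) -> dict[str, MotorCommand]:
        winners: dict[str, MotorCommand] = {}
        for cmd in sorted(requests, key=lambda c: -c.priority):
            winners.setdefault(cmd.effector, cmd)  # highest priority wins
        return winners

# Two hypothetical layers: a fast reactive layer (gaze behavior) and a
# slower dialogue layer (turn-taking).
reactive = Layer("reactive", priority=2, rules={
    "gaze": lambda p: MotorCommand("eyes", f"look-at:{p.value}", 2),
})
dialogue = Layer("dialogue", priority=1, rules={
    "speech": lambda p: MotorCommand("voice", "take-turn", 1),
    "gaze":   lambda p: MotorCommand("head", "orient-to-user", 1),
})

def control_step(percepts: list[Percept]) -> dict[str, MotorCommand]:
    """One full-duplex control cycle: every layer sees every percept."""
    requests = [cmd for layer in (reactive, dialogue)
                for p in percepts
                if (cmd := layer.decide(p)) is not None]
    return ActionScheduler().compose(requests)

if __name__ == "__main__":
    out = control_step([Percept("gaze", "user"), Percept("speech", "pause")])
    for effector, cmd in out.items():
        print(effector, "->", cmd.action)
```

In this sketch, a faster reactive layer simply outranks the slower deliberative one per effector each cycle; the paper's actual scheduling and low-level composition of motor commands is more involved than this single-rule arbitration.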