The MOBOT human-robot communication model

Stavroula-Evita Fotinea, E. Efthimiou, Maria Koutsombogera, Athanasia-Lida Dimou, Theodore Goulas, P. Maragos, C. Tzafestas
DOI: 10.1109/COGINFOCOM.2015.7390590
Published in: 2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), October 2015
Citations: 10

Abstract

This paper reports on work related to the modelling of human-robot communication on the basis of multimodal and multisensory human behaviour analysis. A primary focus of this analysis framework is the definition of the semantics of human actions, i.e. verbal and non-verbal signals, in a specific context with distinct human-robot interaction states. These states are captured and represented in terms of communicative behavioural patterns that influence, and in turn adapt to, the interaction flow, with the goal of feeding a multimodal human-robot communication system. This multimodal HRI model is defined upon a multimodal sensory corpus, acquired as a primary source for data retrieval, analysis and testing of mobility assistive robot prototypes, and ensures the usability of that corpus.
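The abstract's core idea, fusing verbal and non-verbal signals to move between distinct human-robot interaction states, can be sketched as a minimal rule-based state machine. This is purely an illustration of the concept, not the authors' implementation; all state names, commands, and gesture labels below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class InteractionState(Enum):
    """Hypothetical interaction states for a mobility assistive robot."""
    IDLE = auto()
    REQUEST_SUPPORT = auto()   # user has signalled a need for mobility support
    GUIDED_WALKING = auto()    # robot is actively assisting walking

@dataclass
class Observation:
    """One multimodal observation: a verbal and/or a non-verbal cue."""
    speech: Optional[str]      # recognized verbal command, if any
    gesture: Optional[str]     # recognized non-verbal signal, if any

def fuse(state: InteractionState, obs: Observation) -> InteractionState:
    """Combine verbal and non-verbal cues into the next interaction state."""
    # Either modality alone may trigger the support request (late fusion).
    wants_help = obs.speech == "help me walk" or obs.gesture == "reach_for_handles"
    if state is InteractionState.IDLE and wants_help:
        return InteractionState.REQUEST_SUPPORT
    if state is InteractionState.REQUEST_SUPPORT and obs.gesture == "grip_handles":
        return InteractionState.GUIDED_WALKING
    if obs.speech == "stop":
        return InteractionState.IDLE
    return state               # no recognized cue: state is unchanged
```

The sketch shows the behavioural-pattern view described in the abstract: each transition is driven by a communicative pattern (a verbal command, a gesture, or both), and the resulting state in turn conditions which cues are meaningful next.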