Relevant Perception Modalities for Flexible Human-Robot Teams

Nico Höllerich, D. Henrich
DOI: 10.1109/RO-MAN47096.2020.9223593
Published in: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020
Citations: 2

Abstract

Robust and reliable perception plays an important role when humans engage into cooperation with robots in industrial or household settings. Various explicit and implicit communication modalities and perception methods can be used to recognize expressed intentions. Depending on the modality, different sensors, areas of observation, and perception methods need to be utilized. More modalities increase the complexity and costs of the setup. We consider the scenario of a cooperative task in a potentially noisy environment, where verbal communication is hardly feasible. Our goal is to investigate the importance of different, non-verbal communication modalities for intention recognition. To this end, we build upon an established benchmark study for human cooperation and investigate which input modalities contribute most towards recognizing the expressed intention. To measure the detection rate, we conducted a second study. Participants had to predict actions based on a stream of symbolic input data. Findings confirm the existence of a common gesture dictionary and the importance of hand tracking for action prediction when the number of feasible actions increases. The contribution of this work is a usage ranking of gestures and a comparison of input modalities to improve prediction capabilities in human-robot cooperation.
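The paper itself publishes no code, but the second study described above — predicting actions from a stream of symbolic input data — can be illustrated with a minimal sketch. The predictor below simply counts which action follows each observed gesture symbol and predicts the most frequent one; all names (gestures, actions) are hypothetical and not taken from the study.

```python
from collections import Counter, defaultdict

class SymbolicActionPredictor:
    """Toy frequency-based predictor: learns which action tends to
    follow each observed gesture symbol. Illustrative only; the
    study's actual prediction task used human participants."""

    def __init__(self):
        # gesture symbol -> Counter of actions seen after it
        self.counts = defaultdict(Counter)

    def observe(self, gesture, action):
        self.counts[gesture][action] += 1

    def predict(self, gesture):
        # Return the most frequently co-occurring action, or None
        # for a gesture never seen before.
        if not self.counts[gesture]:
            return None
        return self.counts[gesture].most_common(1)[0][0]

# A stream of (gesture, action) pairs, as symbolic input data
stream = [("point_left", "pick_A"), ("point_left", "pick_A"),
          ("open_hand", "handover"), ("point_left", "pick_B")]
p = SymbolicActionPredictor()
for g, a in stream:
    p.observe(g, a)
print(p.predict("point_left"))  # -> pick_A
```

As the abstract notes, such prediction becomes harder as the number of feasible actions grows, which is where additional modalities such as hand tracking pay off.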