Impacts of Teaching towards Training Gesture Recognizers for Human-Robot Interaction

Jianxun Tan, Wesley P. Chan, Nicole L. Robinson, D. Kulić, E. Croft
{"title":"Impacts of Teaching towards Training Gesture Recognizers for Human-Robot Interaction","authors":"Jianxun Tan, Wesley P. Chan, Nicole L. Robinson, D. Kulić, E. Croft","doi":"10.1109/RO-MAN53752.2022.9900774","DOIUrl":null,"url":null,"abstract":"The use of hand-based gestures has been proposed as an intuitive way for people to communicate with robots. Typically the set of gestures is defined by the experimenter. However, existing works do not necessarily focus on gestures that are communicative, and it is unclear whether the selected gesture are actually intuitive to users. This paper investigates whether different people inherently use similar gestures to convey the same commands to robots, and how teaching of gestures when collecting demonstrations for training recognizers can improve resulting accuracy. We conducted this work in two stages. In Stage 1, we conducted an online user study (n=190) to investigate if people use similar gestures to communicate the same set of given commands to a robot when no guidance or training was given. Results revealed large variations in the gestures used among individuals With the absences of training. Training a gesture recognizer using this dataset resulted in an accuracy of around 20%. In response to this, Stage 2 involved proposing a common set of gestures for the commands. We taught these gestures through demonstrations and collected ~ 7500 videos of gestures from study participants to train another gesture recognition model. Initial results showed improved accuracy but a number of gestures had high confusion rates. Refining our gesture set and recognition model by removing those gestures, We achieved an final accuracy of 84.1 ± 2.4%. 
We integrated the gesture recognition model into the ROS framework and demonstrated a use case, where a person commands a robot to perform a pick and place task using the gesture set.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN53752.2022.9900774","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The use of hand-based gestures has been proposed as an intuitive way for people to communicate with robots. Typically, the set of gestures is defined by the experimenter. However, existing works do not necessarily focus on gestures that are communicative, and it is unclear whether the selected gestures are actually intuitive to users. This paper investigates whether different people inherently use similar gestures to convey the same commands to robots, and how teaching gestures when collecting demonstrations for training recognizers can improve the resulting accuracy. We conducted this work in two stages. In Stage 1, we conducted an online user study (n=190) to investigate whether people use similar gestures to communicate the same set of given commands to a robot when no guidance or training was given. Results revealed large variations in the gestures used among individuals in the absence of training. Training a gesture recognizer on this dataset resulted in an accuracy of around 20%. In response, in Stage 2 we proposed a common set of gestures for the commands. We taught these gestures through demonstrations and collected ~7500 videos of gestures from study participants to train another gesture recognition model. Initial results showed improved accuracy, but a number of gestures had high confusion rates. Refining our gesture set and recognition model by removing those gestures, we achieved a final accuracy of 84.1 ± 2.4%. We integrated the gesture recognition model into the ROS framework and demonstrated a use case in which a person commands a robot to perform a pick-and-place task using the gesture set.
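The abstract describes refining the gesture set by removing gestures with high confusion rates and recomputing accuracy. A minimal sketch of that pruning step is shown below; the confusion-matrix values, gesture names, and the recall threshold are all illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch: prune gestures with high confusion rates from a
# confusion matrix, then recompute accuracy over the remaining classes.
# All numbers and gesture names here are made up for illustration.

def accuracy(cm, keep):
    """Overall accuracy restricted to the kept class indices."""
    correct = sum(cm[i][i] for i in keep)
    total = sum(cm[i][j] for i in keep for j in keep)
    return correct / total

# Rows = true gesture, columns = predicted gesture (counts).
cm = [
    [90,  5,  3,  2],   # "stop"  - well recognized
    [ 4, 88,  6,  2],   # "come"  - well recognized
    [30, 25, 40,  5],   # "wave"  - frequently confused with others
    [ 2,  3,  5, 90],   # "point" - well recognized
]

all_classes = range(len(cm))
base = accuracy(cm, all_classes)

# Keep only gestures whose per-class recall clears an assumed threshold.
keep = [i for i in all_classes if cm[i][i] / sum(cm[i]) >= 0.6]
pruned = accuracy(cm, keep)

print(f"accuracy before pruning: {base:.3f}")   # -> 0.770
print(f"accuracy after pruning:  {pruned:.3f}") # -> 0.937
```

Dropping the one poorly separated gesture raises overall accuracy, mirroring (in toy form) the improvement the paper reports after refining its gesture set.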