Combination of eyetracking and computer vision for robotics control

M. Leroux, M. Raison, T. Adadja, S. Achiche
DOI: 10.1109/TePRA.2015.7219692
Published in: 2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), May 11, 2015
Citations: 12

Abstract

Manual control of manipulator robots can be complex and time consuming even for simple tasks, because the robot has more degrees of freedom (DoF) than the joystick can command simultaneously. Among the emerging solutions, eyetracking, which identifies the user's gaze direction, is expected to command some of the robot's DoF automatically. However, eyetracking in three dimensions (3D) still yields large and variable errors, ranging from several centimeters to several meters. The objective of this paper is to combine eyetracking and computer vision to automate the approach of a robot to a targeted point by acquiring the point's 3D location. The method combines three steps:
- A regular eyetracking device measures the user's mean gaze direction.
- The user's field of view is recorded with a webcam, and the targeted point is identified by image analysis.
- The distance between the target and the user is computed by geometrical reconstruction, providing a 3D location for the target.
Over 3 trials, the error analysis reveals that the computed 3D coordinates of the target have an average error of 5.5 cm, which is 92% more accurate than using eyetracking alone for point-of-gaze estimation, whose error is estimated at 72 cm. Finally, we discuss an innovative way to complete the system with smart targets to overcome some of the current limitations of the proposed method.
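The paper does not reproduce its reconstruction equations here, but the third step (combining the gaze direction with the camera's line of sight to the image-identified target) can be illustrated by a common geometric approach: triangulating two 3D rays and taking the midpoint of their shortest connecting segment. The sketch below is a minimal illustration under assumed inputs (known eye and webcam positions and unit direction vectors), not the authors' actual implementation:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Estimate a 3D target point from two rays o1 + t1*d1 and o2 + t2*d2.

    Illustrative assumptions: o1/d1 are the eye position and gaze direction
    from the eyetracker; o2/d2 are the webcam position and the direction of
    the target pixel back-projected through the camera model. Returns the
    midpoint of the shortest segment between the two rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        # Nearly parallel rays: depth is unobservable, as when the gaze
        # and camera directions coincide.
        raise ValueError("rays are (nearly) parallel")
    t1 = (b * e - c * d) / denom   # parameter of closest point on gaze ray
    t2 = (a * e - b * d) / denom   # parameter of closest point on camera ray
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return (p1 + p2) / 2           # estimated 3D target location
```

In practice the two rays rarely intersect exactly because of gaze-estimation noise, which is why a closest-point compromise such as this midpoint is used rather than an exact intersection.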