{"title":"眼球追踪与计算机视觉的结合,用于机器人控制","authors":"M. Leroux, M. Raison, T. Adadja, S. Achiche","doi":"10.1109/TePRA.2015.7219692","DOIUrl":null,"url":null,"abstract":"The manual control of manipulator robots can be complex and time consuming even for simple tasks, due to a number of degrees of freedom (DoF) of the robot that is higher than the number of simultaneous commands of the joystick. Among the emerging solutions, the eyetracking, which identifies the user gaze direction, is expected to automatically command some of the robot DoFs. However, the use of eyetracking in three dimensions (3D) still gives large and variable errors from several centimeters to several meters. The objective of this paper, is to combine eyetracking and computer vision to automate the approach of a robot to its targeted point by acquiring its 3D location. The methodology combines three steps : - A regular eyetracking device measures the user mean gaze direction. - The field of view of the user is recorded using a webcam, and the targeted point identified by image analysis. - The distance between the target and the user is computed using geometrical reconstruction, providing a 3D location point for the target. On 3 trials, the error analysis reveals that the computed coordinates of the target 3D localization has an average error of 5.5cm, which is 92% more accurate than using the eyetracking only for point of gaze calculation, with an estimated error of 72cm. Finally, we discuss an innovative way to complete the system with smart targets to overcome some of the current limitations of the proposed method.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Combination of eyetracking and computer vision for robotics control\",\"authors\":\"M. Leroux, M. Raison, T. Adadja, S. Achiche\",\"doi\":\"10.1109/TePRA.2015.7219692\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The manual control of manipulator robots can be complex and time consuming even for simple tasks, due to a number of degrees of freedom (DoF) of the robot that is higher than the number of simultaneous commands of the joystick. Among the emerging solutions, the eyetracking, which identifies the user gaze direction, is expected to automatically command some of the robot DoFs. However, the use of eyetracking in three dimensions (3D) still gives large and variable errors from several centimeters to several meters. The objective of this paper, is to combine eyetracking and computer vision to automate the approach of a robot to its targeted point by acquiring its 3D location. The methodology combines three steps : - A regular eyetracking device measures the user mean gaze direction. - The field of view of the user is recorded using a webcam, and the targeted point identified by image analysis. - The distance between the target and the user is computed using geometrical reconstruction, providing a 3D location point for the target. On 3 trials, the error analysis reveals that the computed coordinates of the target 3D localization has an average error of 5.5cm, which is 92% more accurate than using the eyetracking only for point of gaze calculation, with an estimated error of 72cm. 
Finally, we discuss an innovative way to complete the system with smart targets to overcome some of the current limitations of the proposed method.\",\"PeriodicalId\":325788,\"journal\":{\"name\":\"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-05-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TePRA.2015.7219692\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TePRA.2015.7219692","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The manual control of manipulator robots can be complex and time consuming even for simple tasks, because the robot has more degrees of freedom (DoF) than the joystick can command simultaneously. Among emerging solutions, eyetracking, which identifies the user's gaze direction, is expected to automatically command some of the robot's DoFs. However, eyetracking in three dimensions (3D) still yields large and variable errors, from several centimeters to several meters. The objective of this paper is to combine eyetracking and computer vision to automate a robot's approach to a targeted point by acquiring the point's 3D location. The methodology combines three steps:
- A regular eyetracking device measures the user's mean gaze direction.
- The user's field of view is recorded with a webcam, and the targeted point is identified by image analysis.
- The distance between the target and the user is computed by geometrical reconstruction, providing a 3D location for the target.
Over 3 trials, the error analysis reveals that the computed 3D target coordinates have an average error of 5.5 cm, which is 92% more accurate than using eyetracking alone for point-of-gaze calculation, whose estimated error is 72 cm. Finally, we discuss an innovative way to complete the system with smart targets to overcome some of the current limitations of the proposed method.
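The abstract does not specify how the target pixel found by image analysis (step 2) is turned into a 3D bearing. A minimal sketch of the usual approach, assuming a calibrated pinhole webcam; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Convert an image pixel (u, v) into a unit viewing ray.

    Assumes a calibrated pinhole camera with focal lengths (fx, fy)
    and principal point (cx, cy), all in pixels. The ray is expressed
    in the camera frame; mapping it into the user's (eyetracker) frame
    would additionally require the webcam-to-eyetracker extrinsics.
    """
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)
```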
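For the geometrical reconstruction of step 3, a common technique is to triangulate the gaze ray against the camera-to-target ray: because both measurements are noisy, the two rays rarely intersect exactly, so one takes the midpoint of their shortest connecting segment. The sketch below assumes both rays are already expressed in a common frame; it is one standard formulation, not necessarily the paper's exact method.

```python
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2, eps=1e-9):
    """Midpoint of the shortest segment between two 3D rays.

    o1, o2: ray origins (3-vectors); d1, d2: direction vectors.
    Solves for parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|,
    then averages the two closest points. Returns None for (near-)parallel
    rays, where the triangulation is ill-conditioned.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = d1 @ d2                  # cosine of the angle between the rays
    w = o1 - o2
    denom = 1.0 - b * b
    if abs(denom) < eps:
        return None
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    p1 = o1 + t1 * d1            # closest point on the gaze ray
    p2 = o2 + t2 * d2            # closest point on the camera ray
    return (p1 + p2) / 2.0

# Hypothetical usage: gaze ray from the eyetracker origin, camera ray
# from a webcam mounted 10 cm to the side; the target resolves ~2 m ahead.
target = closest_point_between_rays(
    o1=np.zeros(3),               d1=np.array([0.0, 0.0, 1.0]),
    o2=np.array([0.1, 0.0, 0.0]), d2=np.array([-0.05, 0.0, 1.0]),
)
```

The midpoint also gives the user-to-target distance directly (the norm of `target - o1`), which is the quantity the abstract's error analysis evaluates.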