Evaluating human gaze patterns during grasping tasks: robot versus human hand

Sai Krishna Allani, Brendan David-John, Javier Ruiz, Saurabh Dixit, Jackson Carter, C. Grimm, Ravi Balasubramanian

Proceedings of the ACM Symposium on Applied Perception, July 2016. DOI: 10.1145/2931002.2931007
Perception and gaze are an integral part of determining where and how to grasp an object. In this study we analyze how gaze patterns differ when participants are asked to manipulate a robotic hand to perform a grasping task, compared with using their own hand. We have three findings. First, while gaze patterns for the object are similar in both conditions, participants spent substantially more time gazing at the robotic hand than at their own, particularly at the wrist and finger positions. Second, we provide evidence that for complex objects (e.g., a toy airplane) participants essentially treated the object as a collection of sub-objects. Third, a follow-up study shows that camera angles that clearly display the features participants spend time gazing at are more effective for judging the quality of a grasp from images. Our findings are relevant both for automated grasping algorithms (where visual cues are important for analyzing objects for potential grasps) and for designing tele-operation interfaces (how best to present visual data to the remote operator).