Authors: Xu Shi, Wei Xu, Weichao Guo, X. Sheng
DOI: 10.1109/ROBIO55434.2022.10011751
Published in: 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), December 5, 2022
Target prediction and temporal localization of grasping action for vision-assisted prosthetic hand
With the development of shared-control technology for humanoid prosthetic hands, research increasingly focuses on vision-based machine decision making. In this paper, we propose a miniaturized eye-in-hand framework for target object prediction and action decision making over the humanoid hand's "approach-grasp" sequence. Our prediction system simultaneously predicts the target object and detects the temporal localization of the grasp action. The system is divided into three main modules: feature logging, target filtering, and grasp triggering. We experimentally determine the optimal configuration of the hyper-parameters designed for each module. We also propose a prediction quality assessment method for "approach-grasp" behavior that covers the instance level, the sequence level, and the action decision level. With the optimal hyper-parameter configuration, the prediction system achieves an average instance prediction accuracy (IP) of 0.854 and a grasp action prediction accuracy (GP) of 0.643. It also shows good predictive stability for most object classes, with the number of prediction changes (NPC) below 6.
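The abstract's metrics can be illustrated with a small sketch. The paper does not give exact formulas here, so the definitions below are assumptions: IP is taken as the fraction of frames in an approach sequence whose predicted label matches the ground-truth target, and NPC as the number of times the frame-level prediction switches between consecutive frames. The function names and the example label sequence are hypothetical.

```python
def instance_accuracy(preds, truth):
    """Assumed IP metric: fraction of frames whose predicted
    label matches the ground-truth target object."""
    if not preds:
        return 0.0
    return sum(1 for p in preds if p == truth) / len(preds)

def num_prediction_changes(preds):
    """Assumed NPC metric: count of consecutive frames where
    the predicted label switches to a different object."""
    return sum(1 for a, b in zip(preds, preds[1:]) if a != b)

# Hypothetical per-frame predictions while approaching a cup.
frames = ["cup", "cup", "bottle", "cup", "cup"]
print(instance_accuracy(frames, "cup"))   # 0.8
print(num_prediction_changes(frames))     # 2
```

A low NPC indicates a stable prediction stream, which matters for grasp triggering: a controller that switches its target mid-approach would command conflicting preshapes of the hand.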