{"title":"DynGraspVS: Servoing Aided Grasping for Dynamic Environments","authors":"Gunjan Gupta, Vedansh Mittal, K. M. Krishna","doi":"10.1109/ROBIO58561.2023.10354813","DOIUrl":null,"url":null,"abstract":"Visual servoing has been gaining popularity in various real-world vision-centric robotic applications. Autonomous robotic grasping often deals with unseen and unstructured environments, and in this task, Visual Servoing has been able to generate improved end-effector control by providing visual feedback. However, existing Servoing-aided grasping methods tend to fail at the task of grasping in dynamic environments i.e. - moving objects.In this paper, we introduce DynGraspVS, a novel Image-based Visual Servoing-aided Grasping approach that models the motion of moving objects in its interaction matrix. Leveraging a single-step rollout strategy, our approach achieves a remarkable increase in success rate, while converging faster and achieving a smoother trajectory, while maintaining precise alignments in six degrees of freedom. By integrating the velocity information into the interaction matrix, our method is able to successfully complete the challenging task of robotic grasping in the case of dynamic objects, while outperforming existing deep Model Predictive Control (MPC) based methods in the PyBullet simulation environment. We test it with a range of objects in the YCB dataset with varying range of shapes, sizes, and material properties. We report various evaluation metrics such as photometric error, success rate, time taken, and trajectory length.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"37 2","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBIO58561.2023.10354813","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Visual servoing has been gaining popularity in various real-world vision-centric robotic applications. Autonomous robotic grasping often deals with unseen and unstructured environments, and in this task visual servoing has been able to improve end-effector control by providing visual feedback. However, existing servoing-aided grasping methods tend to fail at grasping in dynamic environments, i.e., with moving objects. In this paper, we introduce DynGraspVS, a novel image-based visual servoing-aided grasping approach that models the motion of moving objects in its interaction matrix. Leveraging a single-step rollout strategy, our approach achieves a remarkable increase in success rate, converges faster, and produces a smoother trajectory, while maintaining precise alignment in six degrees of freedom. By integrating velocity information into the interaction matrix, our method successfully completes the challenging task of robotic grasping of dynamic objects, outperforming existing deep Model Predictive Control (MPC) based methods in the PyBullet simulation environment. We test it on a range of objects from the YCB dataset with varying shapes, sizes, and material properties, and report evaluation metrics including photometric error, success rate, time taken, and trajectory length.
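The core idea the abstract describes, accounting for target motion inside the servoing control law, mirrors the classical IBVS treatment of a moving target. The sketch below is a minimal illustration of that textbook formulation, not the paper's actual implementation: it uses the standard point-feature interaction matrix and adds a feedforward term for the estimated feature drift. The function names and the assumed constant feature-drift estimate are illustrative only.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard 2x6 interaction matrix for one normalized image point
    (x, y) at depth Z, as in the classical IBVS literature."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y**2,  -x * y,        -x],
    ])

def velocity_command(L, e, e_dot_target, lam=0.5):
    """Camera velocity screw for a moving target:
    v = -lam * L^+ e - L^+ (de/dt), where de/dt is the feature motion
    induced by the object's own velocity. The second (feedforward) term
    is what a static-scene IBVS law omits, which is why such laws lag
    behind moving objects."""
    L_pinv = np.linalg.pinv(L)
    return -lam * (L_pinv @ e) - L_pinv @ e_dot_target

# Toy usage: one point feature on an object drifting in the image.
L = point_interaction_matrix(x=0.1, y=-0.05, Z=0.6)
e = np.array([0.02, -0.01])        # current minus desired feature error
e_dot = np.array([0.005, 0.0])     # estimated feature drift (assumed)
v = velocity_command(L, e, e_dot)  # 6-DoF velocity screw (vx..wz)
print(v)
```

In practice the feature drift would be estimated from consecutive frames or an object tracker; per the abstract, DynGraspVS instead folds the object's velocity information into the interaction matrix itself.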