DynGraspVS: Servoing Aided Grasping for Dynamic Environments

Gunjan Gupta, Vedansh Mittal, K. M. Krishna
{"title":"DynGraspVS:为动态环境提供伺服辅助抓取功能","authors":"Gunjan Gupta, Vedansh Mittal, K. M. Krishna","doi":"10.1109/ROBIO58561.2023.10354813","DOIUrl":null,"url":null,"abstract":"Visual servoing has been gaining popularity in various real-world vision-centric robotic applications. Autonomous robotic grasping often deals with unseen and unstructured environments, and in this task, Visual Servoing has been able to generate improved end-effector control by providing visual feedback. However, existing Servoing-aided grasping methods tend to fail at the task of grasping in dynamic environments i.e. - moving objects.In this paper, we introduce DynGraspVS, a novel Image-based Visual Servoing-aided Grasping approach that models the motion of moving objects in its interaction matrix. Leveraging a single-step rollout strategy, our approach achieves a remarkable increase in success rate, while converging faster and achieving a smoother trajectory, while maintaining precise alignments in six degrees of freedom. By integrating the velocity information into the interaction matrix, our method is able to successfully complete the challenging task of robotic grasping in the case of dynamic objects, while outperforming existing deep Model Predictive Control (MPC) based methods in the PyBullet simulation environment. We test it with a range of objects in the YCB dataset with varying range of shapes, sizes, and material properties. We report various evaluation metrics such as photometric error, success rate, time taken, and trajectory length.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"37 2","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DynGraspVS: Servoing Aided Grasping for Dynamic Environments\",\"authors\":\"Gunjan Gupta, Vedansh Mittal, K. M. Krishna\",\"doi\":\"10.1109/ROBIO58561.2023.10354813\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual servoing has been gaining popularity in various real-world vision-centric robotic applications. Autonomous robotic grasping often deals with unseen and unstructured environments, and in this task, Visual Servoing has been able to generate improved end-effector control by providing visual feedback. However, existing Servoing-aided grasping methods tend to fail at the task of grasping in dynamic environments i.e. - moving objects.In this paper, we introduce DynGraspVS, a novel Image-based Visual Servoing-aided Grasping approach that models the motion of moving objects in its interaction matrix. Leveraging a single-step rollout strategy, our approach achieves a remarkable increase in success rate, while converging faster and achieving a smoother trajectory, while maintaining precise alignments in six degrees of freedom. By integrating the velocity information into the interaction matrix, our method is able to successfully complete the challenging task of robotic grasping in the case of dynamic objects, while outperforming existing deep Model Predictive Control (MPC) based methods in the PyBullet simulation environment. We test it with a range of objects in the YCB dataset with varying range of shapes, sizes, and material properties. 
We report various evaluation metrics such as photometric error, success rate, time taken, and trajectory length.\",\"PeriodicalId\":505134,\"journal\":{\"name\":\"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"volume\":\"37 2\",\"pages\":\"1-8\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROBIO58561.2023.10354813\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBIO58561.2023.10354813","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Visual servoing has been gaining popularity in real-world vision-centric robotic applications. Autonomous robotic grasping often deals with unseen and unstructured environments, and in this task visual servoing has been able to improve end-effector control by providing visual feedback. However, existing servoing-aided grasping methods tend to fail at grasping in dynamic environments, i.e., with moving objects. In this paper, we introduce DynGraspVS, a novel image-based visual servoing-aided grasping approach that models the motion of moving objects in its interaction matrix. Leveraging a single-step rollout strategy, our approach achieves a remarkable increase in success rate while converging faster, producing a smoother trajectory, and maintaining precise alignment in six degrees of freedom. By integrating velocity information into the interaction matrix, our method successfully completes the challenging task of robotic grasping of dynamic objects, outperforming existing deep Model Predictive Control (MPC) based methods in the PyBullet simulation environment. We test it on a range of objects from the YCB dataset with varying shapes, sizes, and material properties, and report evaluation metrics including photometric error, success rate, time taken, and trajectory length.
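The abstract's key mechanism, folding the target's velocity into the interaction matrix, can be illustrated with the classical image-based visual servoing (IBVS) formulation. For a moving target the feature error e = s - s* evolves as de/dt = L v_c + (motion-induced term), so the commanded camera twist becomes v_c = -lambda L+ e - L+ (estimated de/dt), where the second term feed-forwards the object's own motion so the error does not lag behind it. The Python sketch below is a minimal point-feature illustration under standard IBVS assumptions, not the paper's implementation (DynGraspVS works with its own interaction-matrix model and a single-step rollout); the function names, the gain, and the assumption that a per-feature image velocity estimate is available are all illustrative.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classic 2x6 interaction matrix for a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(features, desired, depths, feature_vel, gain=0.5):
    """
    One servoing step toward a moving target (illustrative sketch).

    features, desired : (N, 2) normalized image points (current / goal)
    depths            : (N,) depth estimates for the current points
    feature_vel       : (N, 2) estimated image-plane velocity of the target
                        features, i.e. the motion-induced de/dt term
    Returns a 6-vector camera twist [vx, vy, vz, wx, wy, wz].
    """
    # Stack per-point interaction matrices into a (2N, 6) matrix.
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (features - desired).reshape(-1)        # (2N,) feature error
    de_dt = feature_vel.reshape(-1)             # (2N,) target-motion term
    L_pinv = np.linalg.pinv(L)                  # (6, 2N) pseudo-inverse
    # -gain * L+ e drives the static error to zero; -L+ de/dt
    # feed-forwards the object's motion (the velocity integration
    # the abstract describes, in its simplest textbook form).
    return -gain * L_pinv @ e - L_pinv @ de_dt
```

In practice feature_vel could come from a finite difference of tracked features across consecutive frames; without that second term, the controller reduces to standard IBVS, which is exactly the setting in which servoing-aided grasping tends to lag behind moving objects.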