Deep Reinforcement Learning Enhanced Convolutional Neural Networks for Robotic Grasping

Jianhao Fang, Weifei Hu, Chuxuan Wang, Zhen-yu Liu, Jianrong Tan
{"title":"Deep Reinforcement Learning Enhanced Convolutional Neural Networks for Robotic Grasping","authors":"Jianhao Fang, Weifei Hu, Chuxuan Wang, Zhen-yu Liu, Jianrong Tan","doi":"10.1115/detc2021-67225","DOIUrl":null,"url":null,"abstract":"\n Robotic grasping is an important task for various industrial applications. However, combining detecting and grasping to perform a dynamic and efficient object moving is still a challenge for robotic grasping. Meanwhile, it is time consuming for robotic algorithm training and testing in realistic. Here we present a framework for dynamic robotic grasping based on deep Q-network (DQN) in a virtual grasping space. The proposed dynamic robotic grasping framework mainly consists of the DQN, the convolutional neural network (CNN), and the virtual model of robotic grasping. After observing the result generated by applying the generative grasping convolutional neural network (GG-CNN), a robotic manipulation conducts actions according to Q-network. Different actions generate different rewards, which are implemented to update the neural network through loss function. The goal of this method is to find a reasonable strategy to optimize the total reward and finally accomplish a dynamic grasping process. In the test of virtual space, we achieve an 85.5% grasp success rate on a set of previously unseen objects, which demonstrates the accuracy of DQN enhanced GG-CNN model. The experimental results show that the DQN can efficiently enhance the GG-CNN by considering the grasping procedure (i.e. 
the grasping time and the gripper’s posture), which makes the grasping procedure stable and increases the success rate of robotic grasping.","PeriodicalId":299235,"journal":{"name":"Volume 3B: 47th Design Automation Conference (DAC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Volume 3B: 47th Design Automation Conference (DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/detc2021-67225","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Robotic grasping is an important task in various industrial applications. However, combining detection and grasping to achieve dynamic and efficient object moving remains a challenge for robotic grasping. Meanwhile, training and testing robotic algorithms in realistic environments is time-consuming. Here we present a framework for dynamic robotic grasping based on a deep Q-network (DQN) in a virtual grasping space. The proposed dynamic robotic grasping framework mainly consists of the DQN, a convolutional neural network (CNN), and a virtual model of robotic grasping. After observing the result generated by the generative grasping convolutional neural network (GG-CNN), the robotic manipulator conducts actions according to the Q-network. Different actions generate different rewards, which are used to update the neural network through the loss function. The goal of this method is to find a reasonable strategy that optimizes the total reward and finally accomplishes a dynamic grasping process. In tests in the virtual space, we achieve an 85.5% grasp success rate on a set of previously unseen objects, which demonstrates the accuracy of the DQN-enhanced GG-CNN model. The experimental results show that the DQN can efficiently enhance the GG-CNN by considering the grasping procedure (i.e., the grasping time and the gripper's posture), which stabilizes the grasping procedure and increases the success rate of robotic grasping.
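The observe-act-reward-update loop described in the abstract is the standard Q-learning cycle at the heart of a DQN. The sketch below illustrates that cycle in minimal form: it is not the authors' implementation (which uses a neural Q-network over GG-CNN outputs in a virtual grasping space); the linear Q-function, feature size, action set, and hyperparameters here are all illustrative assumptions. Each update moves Q(s, a) toward the Bellman target r + γ · max_a' Q(s', a'), which is the same principle the paper's DQN uses to learn a grasping strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a feature vector standing in for the GG-CNN grasp
# observation, and a small discrete action set (e.g. adjust gripper pose,
# wait, close gripper).
N_FEATURES, N_ACTIONS = 8, 4
GAMMA, LR = 0.9, 0.1  # discount factor and learning rate (assumed values)

# Linear stand-in for the Q-network: Q(s, a) = W[a] . s
W = np.zeros((N_ACTIONS, N_FEATURES))

def q_values(state):
    """Q-value for every action in the given state."""
    return W @ state

def dqn_update(state, action, reward, next_state, done):
    """One TD update: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = reward if done else reward + GAMMA * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    # Gradient step on the squared TD error 0.5 * td_error^2 w.r.t. W[action].
    W[action] += LR * td_error * state
    return td_error

# Toy scenario: one fixed "grasp observation"; action 2 yields reward 1.
s = rng.normal(size=N_FEATURES)
s /= np.linalg.norm(s)  # unit norm keeps the effective step size at LR

for _ in range(100):                 # sweep every action for a deterministic demo
    for a in range(N_ACTIONS):
        r = 1.0 if a == 2 else 0.0
        dqn_update(s, a, r, s, done=False)

print(int(np.argmax(q_values(s))))   # prints 2: the rewarded action wins
```

In the paper's setting, the tabular-style linear model above is replaced by a deep network, the state by the GG-CNN's grasp-quality output, and the reward by the outcome of the simulated grasp, but the update rule is the same.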