Sparse Reward Based Manipulator Motion Planning by Using High Speed Learning from Demonstrations

Guoyu Zuo, Jiahao Lu, Tingting Pan
{"title":"基于稀疏奖励的高速示范学习机械臂运动规划","authors":"Guoyu Zuo, Jiahao Lu, Tingting Pan","doi":"10.1109/ROBIO.2018.8665328","DOIUrl":null,"url":null,"abstract":"This paper proposed a high speed learning from demonstrations (LfD) method for sparse reward based motion planning problem of manipulator by using hindsight experience replay (HER) mechanism and deep deterministic policy gradient (DDPG) method. First, a demonstrations replay buffer and an agent exploration replay buffer are created for storing experience data, and the hindsight experience replay mechanism is subsequently used to acquire the experience data from the two replay buffers. Then, the deep deterministic policy gradient method is used to learn the experience data and finally fulfil the manipulator motion planning tasks under the sparse reward. Last, experiments on the pushing and pick-and-place tasks were conducted in the robotics environment in the gym. Results show that the training speed is increased to at least 10 times as compared to the deep deterministic policy gradient method without demonstrations data. In addition, the proposed method can effectively utilize the sparse reward, and the agent can quickly complete the task even under the low success rate of demonstrations data.","PeriodicalId":417415,"journal":{"name":"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"235 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Sparse Reward Based Manipulator Motion Planning by Using High Speed Learning from Demonstrations\",\"authors\":\"Guoyu Zuo, Jiahao Lu, Tingting Pan\",\"doi\":\"10.1109/ROBIO.2018.8665328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposed a high speed learning from demonstrations (LfD) method for sparse reward based motion planning problem of manipulator by using hindsight experience replay (HER) mechanism and deep deterministic policy gradient (DDPG) method. First, a demonstrations replay buffer and an agent exploration replay buffer are created for storing experience data, and the hindsight experience replay mechanism is subsequently used to acquire the experience data from the two replay buffers. Then, the deep deterministic policy gradient method is used to learn the experience data and finally fulfil the manipulator motion planning tasks under the sparse reward. Last, experiments on the pushing and pick-and-place tasks were conducted in the robotics environment in the gym. Results show that the training speed is increased to at least 10 times as compared to the deep deterministic policy gradient method without demonstrations data. 
In addition, the proposed method can effectively utilize the sparse reward, and the agent can quickly complete the task even under the low success rate of demonstrations data.\",\"PeriodicalId\":417415,\"journal\":{\"name\":\"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"volume\":\"235 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROBIO.2018.8665328\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBIO.2018.8665328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

This paper proposes a high-speed learning from demonstrations (LfD) method for the sparse-reward motion planning problem of a manipulator, using the hindsight experience replay (HER) mechanism and the deep deterministic policy gradient (DDPG) method. First, a demonstration replay buffer and an agent exploration replay buffer are created to store experience data, and the hindsight experience replay mechanism is then used to acquire experience data from the two buffers. Next, the deep deterministic policy gradient method learns from this experience data and ultimately fulfills the manipulator motion planning tasks under sparse reward. Finally, experiments on pushing and pick-and-place tasks were conducted in the Gym robotics environments. Results show that the training speed is at least 10 times faster than that of the deep deterministic policy gradient method without demonstration data. In addition, the proposed method effectively exploits the sparse reward, and the agent can quickly complete the task even when the success rate of the demonstration data is low.
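
The abstract describes two replay buffers (demonstrations and agent exploration) whose transitions are relabeled with hindsight goals and then fed to DDPG updates under a sparse reward. Below is a minimal, self-contained Python sketch of that idea, assuming the Gym robotics convention of a 0/-1 sparse reward. The class and function names (ReplayBuffer, relabel_episode, sample_mixed_batch), the distance threshold, and the demonstration/agent batch split are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import deque

import numpy as np


def sparse_reward(achieved_goal, desired_goal, threshold=0.1):
    """Sparse reward: 0 on success, -1 otherwise (Gym robotics convention)."""
    return 0.0 if np.linalg.norm(achieved_goal - desired_goal) < threshold else -1.0


class ReplayBuffer:
    """Plain FIFO experience buffer; one instance per data source."""

    def __init__(self, capacity=100_000):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self.storage), min(batch_size, len(self.storage)))


def relabel_episode(episode, k_future=4):
    """Hindsight relabeling: replay each transition with goals actually achieved
    later in the same episode, so even failed rollouts yield reward signal."""
    relabeled = []
    for t, (obs, action, _achieved, goal, next_achieved) in enumerate(episode):
        # Original transition with the true desired goal.
        relabeled.append((obs, action, goal, sparse_reward(next_achieved, goal)))
        # Substitute up to k "future" achieved goals as the desired goal.
        for f in np.random.randint(t, len(episode), size=k_future):
            new_goal = episode[f][4]  # achieved goal at a later step
            relabeled.append((obs, action, new_goal,
                              sparse_reward(next_achieved, new_goal)))
    return relabeled


def sample_mixed_batch(demo_buffer, agent_buffer, batch_size=128, demo_fraction=0.25):
    """Mix demonstration and exploration experience in each DDPG update batch."""
    n_demo = int(batch_size * demo_fraction)
    return demo_buffer.sample(n_demo) + agent_buffer.sample(batch_size - n_demo)


if __name__ == "__main__":
    demo_buffer, agent_buffer = ReplayBuffer(), ReplayBuffer()

    # Toy episode: (obs, action, achieved_goal, desired_goal, next_achieved_goal).
    episode = [(np.zeros(10), np.zeros(4),
                np.random.rand(3), np.ones(3), np.random.rand(3)) for _ in range(50)]
    for tr in relabel_episode(episode):
        agent_buffer.add(tr)          # agent's own exploration
    for tr in relabel_episode(episode):
        demo_buffer.add(tr)           # in practice, filled from teacher demonstrations

    batch = sample_mixed_batch(demo_buffer, agent_buffer)
    print(len(batch), "transitions ready for a DDPG actor/critic update")
```

Keeping the demonstrations in a separate buffer and reserving a fixed fraction of each batch for them is one common way to let the policy benefit from demonstrations early on, while hindsight relabeling keeps the sparse reward informative; the exact mixing ratio and relabeling strategy used in the paper may differ.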