Learning Smooth and Omnidirectional Locomotion for Quadruped Robots

Jiaxi Wu, Chenan Wang, Dianmin Zhang, Shanlin Zhong, Boxing Wang, Hong Qiao
{"title":"Learning Smooth and Omnidirectional Locomotion for Quadruped Robots","authors":"Jiaxi Wu, Chenan Wang, Dianmin Zhang, Shanlin Zhong, Boxing Wang, Hong Qiao","doi":"10.1109/ICARM52023.2021.9536204","DOIUrl":null,"url":null,"abstract":"It often takes a lot of trial and error to get a quadruped robot to learn a proper and natural gait directly through reinforcement learning. Moreover, it requires plenty of attempts and clever reward settings to learn appropriate locomotion. However, the success rate of network convergence is still relatively low. In this paper, the referred trajectory, inverse kinematics, and transformation loss are integrated into the training process of reinforcement learning as prior knowledge. Therefore reinforcement learning only needs to search for the optimal solution around the referred trajectory, making it easier to find the appropriate locomotion and guarantee convergence. When testing, a PD controller is fused into the trained model to reduce the velocity following error. Based on the above ideas, we propose two control framework - single closed-loop and double closed-loop. And their effectiveness is proved through experiments. It can efficiently help quadruped robots learn appropriate gait and realize smooth and omnidirectional locomotion, which all learned in one model.","PeriodicalId":367307,"journal":{"name":"2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICARM52023.2021.9536204","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Getting a quadruped robot to learn a proper and natural gait directly through reinforcement learning often takes a great deal of trial and error. It also requires many attempts and carefully designed rewards to learn appropriate locomotion, yet the success rate of network convergence remains relatively low. In this paper, a reference trajectory, inverse kinematics, and a transformation loss are integrated into the reinforcement learning training process as prior knowledge. Reinforcement learning therefore only needs to search for the optimal solution around the reference trajectory, which makes it easier to find appropriate locomotion and helps guarantee convergence. At test time, a PD controller is fused into the trained model to reduce the velocity-following error. Based on these ideas, we propose two control frameworks, single closed-loop and double closed-loop, and demonstrate their effectiveness through experiments. The approach efficiently helps quadruped robots learn an appropriate gait and realize smooth, omnidirectional locomotion, all learned within a single model.
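The paper itself does not provide source code, but the pipeline sketched in the abstract can be illustrated with a minimal example: a cyclic reference foot trajectory supplies the prior, a learned policy adds a residual around it, two-link inverse kinematics converts the foot target into joint angles, and an outer PD loop on the commanded body velocity reduces the velocity-following error at test time. All function names, gains, link lengths, and the single-leg setup below are hypothetical placeholders for illustration, not the authors' implementation.

```python
import numpy as np

def reference_foot_trajectory(phase, step_length=0.08, step_height=0.05):
    """Hypothetical cyclic reference for one foot, (x, z) in the hip frame.

    Swing for phase in [0, 0.5), stance for phase in [0.5, 1.0)."""
    if phase < 0.5:                                 # swing: move foot forward, lift it
        s = phase / 0.5
        return np.array([step_length * (s - 0.5), step_height * np.sin(np.pi * s)])
    s = (phase - 0.5) / 0.5                         # stance: drag foot back on the ground
    return np.array([step_length * (0.5 - s), 0.0])

def two_link_ik(x, z, l1=0.2, l2=0.2):
    """Planar two-link inverse kinematics (hip pitch, knee) for a foot target
    in the hip frame with z pointing up; link lengths are placeholders."""
    c = (x * x + z * z - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    knee = np.arccos(np.clip(c, -1.0, 1.0))         # relative knee bend, 0 = straight leg
    hip = np.arctan2(x, -z) - np.arctan2(l2 * np.sin(knee), l1 + l2 * np.cos(knee))
    return hip, knee

def policy_residual(observation):
    """Stand-in for the trained RL policy: a small residual around the reference."""
    return np.zeros(2)                              # a real policy maps obs -> residual

def pd_velocity_correction(v_cmd, v_meas, err_prev, dt=0.02, kp=0.4, kd=0.05):
    """Outer PD loop on the body-velocity error, fused in at test time."""
    err = v_cmd - v_meas
    return kp * err + kd * (err - err_prev) / dt, err

# One control step for a single leg (the -0.35 m offset puts the foot below the hip).
phase, obs, err_prev = 0.25, np.zeros(10), 0.0
foot_xz = reference_foot_trajectory(phase) + policy_residual(obs) + np.array([0.0, -0.35])
corr, err_prev = pd_velocity_correction(v_cmd=0.5, v_meas=0.42, err_prev=err_prev)
foot_xz[0] += 0.02 * corr                           # illustrative coupling: shift the foot
                                                    # target when body velocity lags the command
hip_angle, knee_angle = two_link_ik(*foot_xz)
print(f"hip = {hip_angle:.3f} rad, knee = {knee_angle:.3f} rad")
```

In this reading, the policy only searches in a small neighborhood of the reference trajectory, which is what the abstract credits for easier convergence; the PD term is applied outside the learned model, so it can be added at test time without retraining.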