Balance Control of a Humanoid Robot Using Deep Reinforcement Learning

E. Kouchaki, M. Palhang
{"title":"基于深度强化学习的仿人机器人平衡控制","authors":"E. Kouchaki, M. Palhang","doi":"10.1109/CSICC58665.2023.10105418","DOIUrl":null,"url":null,"abstract":"In this paper, a deep reinforcement learning algorithm is presented to control a humanoid robot. We have used two control levels in a hierarchical manner. Within the high-level control architecture, a policy is determined by a combination of two neural networks as actor and critic and optimized using proximal policy optimization (PPO) method. The output policy specifies reference angles for robot joint space. At the low-level control, a PID controller regulates robot states around the reference values. The robot model is provided in MuJoCo physics engine and simulations are performed using mujoco-py library. During the simulations robot could maintain its balance stability against wide variety of exerted disturbances. The results showed that the proposed algorithm had a good performance and could resist larger push impacts compared to the pure PID controller.","PeriodicalId":127277,"journal":{"name":"2023 28th International Computer Conference, Computer Society of Iran (CSICC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Balance Control of a Humanoid Robot Using DeepReinforcement Learning\",\"authors\":\"E. Kouchaki, M. Palhang\",\"doi\":\"10.1109/CSICC58665.2023.10105418\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a deep reinforcement learning algorithm is presented to control a humanoid robot. We have used two control levels in a hierarchical manner. Within the high-level control architecture, a policy is determined by a combination of two neural networks as actor and critic and optimized using proximal policy optimization (PPO) method. The output policy specifies reference angles for robot joint space. At the low-level control, a PID controller regulates robot states around the reference values. The robot model is provided in MuJoCo physics engine and simulations are performed using mujoco-py library. During the simulations robot could maintain its balance stability against wide variety of exerted disturbances. 
The results showed that the proposed algorithm had a good performance and could resist larger push impacts compared to the pure PID controller.\",\"PeriodicalId\":127277,\"journal\":{\"name\":\"2023 28th International Computer Conference, Computer Society of Iran (CSICC)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 28th International Computer Conference, Computer Society of Iran (CSICC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSICC58665.2023.10105418\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 28th International Computer Conference, Computer Society of Iran (CSICC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSICC58665.2023.10105418","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In this paper, a deep reinforcement learning algorithm is presented to control a humanoid robot. We use two control levels in a hierarchical manner. In the high-level controller, the policy is represented by two neural networks, an actor and a critic, and optimized with the proximal policy optimization (PPO) method. The policy outputs reference angles for the robot's joints. At the low level, a PID controller regulates the robot's states around these reference values. The robot is modeled in the MuJoCo physics engine and simulations are performed with the mujoco-py library. During the simulations, the robot maintained its balance against a wide variety of applied disturbances. The results show that the proposed algorithm performs well and withstands larger push impacts than a pure PID controller.
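The abstract gives no implementation details beyond this description, but the low-level loop it mentions is a standard per-joint PID tracker of the reference angles produced by the policy. The sketch below is a minimal illustration of such a controller, assuming vectorized joint angles; the class name `JointPID`, the gain structure, and the time step are hypothetical and not taken from the paper.

```python
import numpy as np

class JointPID:
    """Minimal per-joint PID that tracks reference joint angles (illustrative only)."""

    def __init__(self, kp, ki, kd, n_joints, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt
        self.integral = np.zeros(n_joints)
        self.prev_error = np.zeros(n_joints)

    def control(self, q_ref, q):
        # Error between the reference angles produced by the high-level
        # policy and the measured joint angles.
        error = q_ref - q
        self.integral += error * self.dt
        d_error = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The output is interpreted as joint torques sent to the actuators.
        return self.kp * error + self.ki * self.integral + self.kd * d_error
```

With per-joint gain vectors instead of scalar gains, the same structure accommodates joints with very different inertias.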
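As a rough sketch of how the two control levels could be wired together in mujoco-py (the library named in the abstract): a high-level policy standing in for the trained PPO actor produces reference joint angles at a lower rate, and the `JointPID` controller above converts the tracking error into torques at every physics step. The model path `humanoid.xml`, the policy stand-in, the control decimation, and the floating-base index offsets are assumptions, not details from the paper.

```python
import numpy as np
import mujoco_py  # simulation bindings named in the abstract

model = mujoco_py.load_model_from_path("humanoid.xml")  # placeholder MJCF path
sim = mujoco_py.MjSim(model)

n_joints = model.nu              # number of actuated joints
dt = model.opt.timestep
pid = JointPID(kp=100.0, ki=0.5, kd=5.0, n_joints=n_joints, dt=dt)  # illustrative gains

def policy(obs):
    # Stand-in for the trained PPO actor: a real policy maps the observation
    # to reference joint angles; here it simply commands the nominal pose.
    return np.zeros(n_joints)

decimation = 10                  # assumed: policy runs every 10 physics steps
q_ref = np.zeros(n_joints)

for step in range(5000):
    obs = np.concatenate([sim.data.qpos, sim.data.qvel])
    if step % decimation == 0:
        q_ref = policy(obs)
    # Assuming a floating-base humanoid: qpos[0:7] / qvel[0:6] describe the base,
    # and the remaining entries are the actuated joints, in actuator order.
    q = sim.data.qpos[7:7 + n_joints]
    sim.data.ctrl[:] = pid.control(q_ref, q)  # assumes torque actuators
    sim.step()
```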