{"title":"基于随机连续深度强化学习的两足步行机器人轨迹控制","authors":"Atikah Surriani, Oyas Wahyunggoro, None Adha Imam Cahyadi","doi":"10.5109/7151701","DOIUrl":null,"url":null,"abstract":": The bipedal walking robot is an advanced anthropomorphic robot that can mimic the human ability to walk. Controlling the bipedal walking robot is difficult due to its nonlinearity and complexity. To solve this problem, recent studies have applied various machine learning algorithms based on reinforcement learning approaches, however most of them rely on deterministic-policy-based strategy. This research proposes Soft Actor Critic (SAC), which has stochastic policy strategy for controlling the bipedal walking robot. The option thought deterministic and stochastic policy affects the exploration of DRL algorithm. The SAC is a Deep Reinforcement Learning (DRL) based algorithm whose improvement obtained through the augmented entropy-based expected return allows the SAC algorithm to learn faster, gain exploration ability, and still ensure convergence. The SAC algorithm’s performance is validated with a bipedal robot to walk towards the straight-line trajectory. The number of the reward and the cumulative reward during the training is used as the algorithm's performance evaluation. The SAC algorithm controls the bipedal walking robot well with a total reward of 384,752.8.","PeriodicalId":12085,"journal":{"name":"Evergreen","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Trajectory Control for Bipedal Walking Robot Using Stochastic-Based Continuous Deep Reinforcement Learning\",\"authors\":\"Atikah Surriani, Oyas Wahyunggoro, None Adha Imam Cahyadi\",\"doi\":\"10.5109/7151701\",\"DOIUrl\":null,\"url\":null,\"abstract\":\": The bipedal walking robot is an advanced anthropomorphic robot that can mimic the human ability to walk. 
Controlling the bipedal walking robot is difficult due to its nonlinearity and complexity. To solve this problem, recent studies have applied various machine learning algorithms based on reinforcement learning approaches, however most of them rely on deterministic-policy-based strategy. This research proposes Soft Actor Critic (SAC), which has stochastic policy strategy for controlling the bipedal walking robot. The option thought deterministic and stochastic policy affects the exploration of DRL algorithm. The SAC is a Deep Reinforcement Learning (DRL) based algorithm whose improvement obtained through the augmented entropy-based expected return allows the SAC algorithm to learn faster, gain exploration ability, and still ensure convergence. The SAC algorithm’s performance is validated with a bipedal robot to walk towards the straight-line trajectory. The number of the reward and the cumulative reward during the training is used as the algorithm's performance evaluation. The SAC algorithm controls the bipedal walking robot well with a total reward of 384,752.8.\",\"PeriodicalId\":12085,\"journal\":{\"name\":\"Evergreen\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evergreen\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5109/7151701\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Environmental 
Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evergreen","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5109/7151701","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Environmental Science","Score":null,"Total":0}
A Trajectory Control for Bipedal Walking Robot Using Stochastic-Based Continuous Deep Reinforcement Learning
Abstract: The bipedal walking robot is an advanced anthropomorphic robot that can mimic the human ability to walk. Controlling a bipedal walking robot is difficult due to its nonlinearity and complexity. To address this problem, recent studies have applied various machine learning algorithms based on reinforcement learning; however, most of them rely on deterministic-policy-based strategies. This research proposes Soft Actor-Critic (SAC), which uses a stochastic policy, for controlling the bipedal walking robot. The choice between a deterministic and a stochastic policy affects the exploration behavior of a DRL algorithm. SAC is a Deep Reinforcement Learning (DRL) algorithm whose entropy-augmented expected return allows it to learn faster and gain exploration ability while still ensuring convergence. The SAC algorithm's performance is validated on a bipedal robot walking along a straight-line trajectory. The per-episode reward and the cumulative reward during training are used to evaluate the algorithm's performance. The SAC algorithm controls the bipedal walking robot well, achieving a total reward of 384,752.8.
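The entropy-augmented expected return that distinguishes SAC from deterministic-policy methods can be sketched as follows. This is an illustrative simplification for a one-dimensional Gaussian policy, not the authors' implementation; the function names (`gaussian_entropy`, `soft_return`) and the temperature value `alpha=0.2` are assumptions for the sketch.

```python
import math

def gaussian_entropy(std):
    # Differential entropy of a 1-D Gaussian policy: 0.5 * ln(2*pi*e*std^2).
    # Larger std (more exploratory policy) -> larger entropy bonus.
    return 0.5 * math.log(2 * math.pi * math.e * std ** 2)

def soft_return(rewards, stds, alpha=0.2, gamma=0.99):
    # Entropy-augmented discounted return used by SAC-style objectives:
    #   sum_t gamma^t * ( r_t + alpha * H(pi(.|s_t)) )
    # With alpha = 0 this reduces to the ordinary discounted return.
    return sum(
        (gamma ** t) * (r + alpha * gaussian_entropy(std))
        for t, (r, std) in enumerate(zip(rewards, stds))
    )
```

The temperature `alpha` trades off reward maximization against policy entropy: raising it rewards more stochastic (exploratory) behavior, which is the mechanism the abstract credits for SAC's improved exploration while still converging.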
Evergreen (Environmental Science: Management, Monitoring, Policy and Law)
CiteScore: 4.30
Self-citation rate: 0.00%
Number of articles published: 99
About the journal:
"Evergreen - Joint Journal of Novel Carbon Resource Sciences & Green Asia Strategy" is a refereed international open-access online journal serving researchers in academic and research organizations, as well as practitioners in science and technology, who contribute to the realization of a Green Asia where ecology and economic growth coexist. The scope of the journal covers science, technology, economics, and social science; namely, Novel Carbon Resource Sciences, Green Asia Strategy, and other fields related to the Asian environment. The journal aims to help resolve or mitigate global and local problems in Asia by bringing together new ideas and developments. The editors welcome good-quality contributions from all over Asia.