{"title":"Hierarchical reinforcement learning for enhancing stability and adaptability of hexapod robots in complex terrains","authors":"Shichang Huang , Zhihan Xiao , Minhua Zheng , Wen Shi","doi":"10.1016/j.birob.2025.100231","DOIUrl":null,"url":null,"abstract":"<div><div>In the field of hexapod robot control, the application of central pattern generators (CPG) and deep reinforcement learning (DRL) is becoming increasingly common. Compared to traditional control methods that rely on dynamic models, both the CPG and the end-to-end DRL approaches significantly simplify the complexity of designing control models. However, relying solely on DRL for control also has its drawbacks, such as slow convergence speed and low exploration efficiency. Moreover, although the CPG can produce rhythmic gaits, its control strategy is relatively singular, limiting the robot’s ability to adapt to complex terrains. To overcome these limitations, this study proposes a three-layer DRL control architecture. The high-level reinforcement learning controller is responsible for learning the parameters of the middle-level CPG and the low-level mapping functions, while the middle and low level controllers coordinate the joint movements within and between legs. By integrating the learning capabilities of DRL with the gait generation characteristics of CPG, this method significantly enhances the stability and adaptability of hexapod robots in complex terrains. Experimental results show that, compared to pure DRL approaches, this method significantly improves learning efficiency and control performance, when dealing with complex terrains, it considerably enhances the robot’s stability and adaptability compared to pure CPG control.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100231"},"PeriodicalIF":5.4000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomimetic Intelligence and Robotics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667379725000221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In the field of hexapod robot control, central pattern generators (CPGs) and deep reinforcement learning (DRL) are increasingly widely applied. Compared with traditional control methods that rely on dynamic models, both CPG-based and end-to-end DRL approaches significantly reduce the complexity of control design. However, relying solely on DRL has drawbacks, such as slow convergence and low exploration efficiency. Conversely, although a CPG can produce rhythmic gaits, its control strategy is comparatively rigid, limiting the robot's ability to adapt to complex terrains. To overcome these limitations, this study proposes a three-layer DRL control architecture. A high-level reinforcement learning controller learns the parameters of the middle-level CPG and the low-level mapping functions, while the middle- and low-level controllers coordinate joint movements within and between legs. By integrating the learning capability of DRL with the gait-generation characteristics of the CPG, this method significantly enhances the stability and adaptability of hexapod robots in complex terrains. Experimental results show that, compared with pure DRL approaches, the method significantly improves learning efficiency and control performance; on complex terrains, it considerably enhances the robot's stability and adaptability compared with pure CPG control.
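The abstract describes the three layers only at a high level. The Python sketch below illustrates one plausible reading of that structure; the Hopf-oscillator CPG model, the affine joint mapping, the tripod phase offsets, and every parameter name are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the three-layer architecture described in the
# abstract. The oscillator model, coupling scheme, and joint mapping are
# assumptions, not taken from the paper.

class HopfCPG:
    """Middle layer: one Hopf oscillator per leg, phase-coupled for a gait."""
    def __init__(self, n_legs=6, dt=0.01):
        self.n = n_legs
        self.dt = dt
        self.x = np.full(n_legs, 0.1)   # oscillator states
        self.y = np.zeros(n_legs)
        # Fixed inter-leg phase offsets for a tripod gait (assumption).
        self.phase = np.array([0, np.pi, 0, np.pi, 0, np.pi])

    def step(self, mu, omega, alpha=10.0, coupling=0.5):
        """Integrate one step; mu (squared amplitude) and omega (rad/s)
        are supplied by the high-level policy."""
        r2 = self.x**2 + self.y**2
        dx = alpha * (mu - r2) * self.x - omega * self.y
        dy = alpha * (mu - r2) * self.y + omega * self.x
        # Diffusive coupling that pulls neighbors toward the phase pattern.
        for i in range(self.n):
            j = (i + 1) % self.n
            dphi = self.phase[j] - self.phase[i]
            dy[i] += coupling * (self.y[j] * np.cos(dphi)
                                 - self.x[j] * np.sin(dphi))
        self.x += dx * self.dt
        self.y += dy * self.dt
        return self.x  # one rhythmic signal per leg

def joint_mapping(cpg_out, gains, offsets):
    """Low layer: map each leg's oscillator output to its 3 joint angles
    through an affine function whose parameters the policy also learns."""
    return gains * cpg_out[:, None] + offsets  # shape: (6 legs, 3 joints)

def high_level_policy(observation):
    """High layer (placeholder): a trained DRL policy would map robot
    observations to CPG and mapping parameters; constants stand in here."""
    mu, omega = 1.0, 2.0 * np.pi
    gains = np.tile([0.3, 0.5, 0.4], (6, 1))
    offsets = np.zeros((6, 3))
    return mu, omega, gains, offsets

cpg = HopfCPG()
mu, omega, gains, offsets = high_level_policy(observation=None)
for _ in range(500):  # 5 s rollout at dt = 0.01
    targets = joint_mapping(cpg.step(mu, omega), gains, offsets)
    # `targets` would be sent to the robot's joint position controllers.
```

One design consequence this sketch makes visible is the separation of timescales: the DRL policy can act at a low rate, re-choosing mu, omega, gains, and offsets as the terrain changes, while the CPG and mapping layers run at the control rate, which is consistent with the abstract's claim of faster learning than end-to-end DRL over raw joint commands.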