{"title":"Emergent synthesis of motion patterns for locomotion robots","authors":"M.M. Svinin, K. Yamada, K. Ueda","doi":"10.1016/S0954-1810(01)00027-9","DOIUrl":null,"url":null,"abstract":"<div><p>Emergence of stable gaits in locomotion robots is studied in this paper. A classifier system, implementing an instance-based reinforcement-learning scheme, is used for the sensory-motor control of an eight-legged mobile robot and for the synthesis of the robot's gaits. The robot has no a priori knowledge of the environment and no internal model of itself. It is only assumed that the robot can acquire stable gaits by learning how to reach a goal area. During the learning process, the control system self-organizes through reinforcement signals. Reaching the goal area defines a global reward; forward motion receives a local reward, while stepping back and falling down incur a local punishment. As learning progresses, the number of action rules in the classifier system stabilizes at a level corresponding to the acquired gait patterns. The feasibility of the proposed self-organizing system is tested in simulation and in experiments. A minimal simulation model that does not require sophisticated computational schemes is constructed and used in the simulations. The simulation data, evolved on the minimal model of the robot, are downloaded to the control system of the real robot. Overall, seven of the ten simulation data sets successfully run the real robot.</p></div>","PeriodicalId":100123,"journal":{"name":"Artificial Intelligence in Engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2001-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/S0954-1810(01)00027-9","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence in Engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0954181001000279","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22
Abstract
Emergence of stable gaits in locomotion robots is studied in this paper. A classifier system, implementing an instance-based reinforcement-learning scheme, is used for the sensory-motor control of an eight-legged mobile robot and for the synthesis of the robot's gaits. The robot has no a priori knowledge of the environment and no internal model of itself. It is only assumed that the robot can acquire stable gaits by learning how to reach a goal area. During the learning process, the control system self-organizes through reinforcement signals. Reaching the goal area defines a global reward; forward motion receives a local reward, while stepping back and falling down incur a local punishment. As learning progresses, the number of action rules in the classifier system stabilizes at a level corresponding to the acquired gait patterns. The feasibility of the proposed self-organizing system is tested in simulation and in experiments. A minimal simulation model that does not require sophisticated computational schemes is constructed and used in the simulations. The simulation data, evolved on the minimal model of the robot, are downloaded to the control system of the real robot. Overall, seven of the ten simulation data sets successfully run the real robot.
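The reward scheme described in the abstract can be sketched as a minimal rule-strength update. This is an illustrative reconstruction only, not the paper's implementation: the reward magnitudes, the `Rule` structure, and the pruning threshold are all assumptions chosen for clarity, standing in for the instance-based classifier system the authors use.

```python
# Hypothetical reward magnitudes; the paper does not specify these values here.
GLOBAL_REWARD = 10.0   # reaching the goal area
LOCAL_REWARD = 1.0     # forward motion
LOCAL_PUNISH = -1.0    # stepping back or falling down

class Rule:
    """A condition-action rule whose strength is shaped by reinforcement."""
    def __init__(self, condition, action, strength=1.0):
        self.condition = condition  # sensory pattern the rule matches
        self.action = action        # leg-motion command it proposes
        self.strength = strength

def reinforce(rule, outcome):
    """Credit or punish a fired rule based on the resulting motion."""
    if outcome == "goal":
        rule.strength += GLOBAL_REWARD
    elif outcome == "forward":
        rule.strength += LOCAL_REWARD
    elif outcome in ("backward", "fall"):
        rule.strength += LOCAL_PUNISH

def prune(rules, threshold=0.0):
    """Discard weak rules; the surviving set would correspond to acquired gaits."""
    return [r for r in rules if r.strength > threshold]
```

Under repeated reinforcement of this kind, rules that consistently produce forward motion accumulate strength while rules that cause backward steps or falls are pruned away, which mirrors the stabilization of the rule count that the abstract reports.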