{"title":"Collaborative control in a humanoid dynamic task","authors":"D. Pardo, C. Angulo","doi":"10.5220/0001629001740180","DOIUrl":null,"url":null,"abstract":"This paper describes a collaborative control scheme that governs the dynamic behavior of an articulated mobile robot with several degrees of freedom (DOF) and redundancies. These types of robots need a high level of coordination between the motors performance to complete their motions. In the employed scheme, the actuators involved in a specific task share information, computing integrated control actions. The control functions are found using a stochastic reinforcement learning technique allowing the robot to automatically generate them based on experiences. This type of control is based on a modularization principle: complex overall behavior is the result of the interaction of individual simple components. Unlike the standard procedures, this approach is not meant to follow a trajectory generated by a planner, instead, the trajectory emerges as a consequence of the collaboration between joints movements while seeking the achievement of a goal. The learning of the sensorimotor coordination in a simulated humanoid is presented as a demonstration.","PeriodicalId":302311,"journal":{"name":"ICINCO-RA","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICINCO-RA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0001629001740180","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
This paper describes a collaborative control scheme that governs the dynamic behavior of an articulated mobile robot with several degrees of freedom (DOF) and redundancies. Robots of this type require a high level of coordination among their motors to complete their motions. In the proposed scheme, the actuators involved in a specific task share information and compute integrated control actions. The control functions are found using a stochastic reinforcement learning technique, allowing the robot to generate them automatically from experience. This type of control rests on a modularization principle: complex overall behavior results from the interaction of simple individual components. Unlike standard procedures, this approach does not follow a trajectory generated by a planner; instead, the trajectory emerges from the collaboration between joint movements as the robot pursues a goal. The learning of sensorimotor coordination in a simulated humanoid is presented as a demonstration.
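To make the modular, collaborative idea concrete, the following is a minimal sketch, not the paper's implementation: each joint keeps its own simple linear policy, all joints read a shared state vector (the information sharing), and every module updates its weights with a REINFORCE-style stochastic policy-gradient rule from a common task reward, so coordinated motion emerges without a planned trajectory. All names, dimensions, dynamics, and the reward function here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a collaborative, modular joint controller trained with a
# stochastic policy-gradient (REINFORCE-style) rule. Dimensions, dynamics, and the
# reward are toy assumptions standing in for the humanoid simulation.

rng = np.random.default_rng(0)

N_JOINTS = 4               # number of actuated DOF in the toy model
STATE_DIM = 2 * N_JOINTS   # each module sees every joint's angle and velocity (shared information)
SIGMA = 0.1                # exploration noise of the stochastic policy
ALPHA = 1e-3               # learning rate

# One linear policy per joint; each maps the *shared* state to that joint's torque.
weights = [rng.normal(scale=0.01, size=STATE_DIM) for _ in range(N_JOINTS)]

def policy(state):
    """Stochastic controller: mean torque per joint plus Gaussian exploration."""
    means = np.array([w @ state for w in weights])
    noise = rng.normal(scale=SIGMA, size=N_JOINTS)
    return means + noise, noise

def step_dynamics(state, torques, dt=0.01):
    """Toy double-integrator dynamics standing in for the articulated robot."""
    angles, vels = state[:N_JOINTS], state[N_JOINTS:]
    vels = vels + dt * torques
    angles = angles + dt * vels
    return np.concatenate([angles, vels])

def reward(state):
    """Illustrative goal: drive all joints toward a target posture."""
    target = np.zeros(N_JOINTS)
    return -np.sum((state[:N_JOINTS] - target) ** 2)

for episode in range(200):
    state = rng.normal(scale=0.5, size=STATE_DIM)
    grads = [np.zeros(STATE_DIM) for _ in range(N_JOINTS)]
    episodic_return = 0.0
    for t in range(100):
        torques, noise = policy(state)
        # REINFORCE eligibility: d log pi / d w for a Gaussian policy.
        for j in range(N_JOINTS):
            grads[j] += (noise[j] / SIGMA ** 2) * state
        state = step_dynamics(state, torques)
        episodic_return += reward(state)
    # Each joint module updates only its own weights, but from the shared episodic
    # return, so coordinated behavior emerges rather than being planned in advance.
    for j in range(N_JOINTS):
        weights[j] += ALPHA * episodic_return * grads[j]
```

The key design point this sketch tries to mirror is that no module is given a reference trajectory: the only coupling between joints is the shared state they observe and the common return they are rewarded by.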