Using Actor-Critic Reinforcement Learning for Control and Flight Formation of Quadrotors
Edgar Torres, Lei Xu, Tohid Sardarmehni
Volume 5: Dynamics, Vibration, and Control
Published 2022-10-30 · DOI: 10.1115/imece2022-97224 (https://doi.org/10.1115/imece2022-97224)
Citations: 0
Abstract
This paper introduces a near-optimal controller for quadrotors. The quadrotor is modeled as a complex twelve-state system. The controller is simplified by splitting it into two levels: an upper-level (kinematic) controller acting on six states and a lower-level (kinetic) controller acting on all twelve states. In the upper level, an actor-critic optimal controller, whose parameters are tuned by reinforcement learning, generates the desired velocities; these desired velocities are then used to solve for the lower-level control algebraically. Simulation results demonstrate the effectiveness of the solution.
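To make the upper-level idea concrete, the following is a minimal, illustrative sketch of an actor-critic loop for a kinematic position controller: the actor outputs a desired velocity command and the critic estimates the cost-to-go, with both tuned online from temporal-difference errors. The quadratic features, stage-cost weights, gains, and learning rates here are assumptions for the demo, not the paper's implementation.

```python
import numpy as np

# Toy actor-critic tuning of an upper-level (kinematic) controller.
# Actor:  desired velocity v = -k * p        (k is the tunable gain)
# Critic: cost-to-go      V(p) ~= w * ||p||^2 (w is the tunable weight)
# All hyperparameters below are illustrative assumptions.

rng = np.random.default_rng(0)
dt, gamma = 0.05, 0.99           # integration step, discount factor
alpha_critic, alpha_actor = 0.05, 0.01

w = 1.0    # critic parameter
k = 0.1    # actor gain

for episode in range(200):
    p = rng.uniform(-1.0, 1.0, size=3)    # position error (kinematic state)
    for _ in range(100):
        v = -k * p                         # actor: desired velocity command
        cost = p @ p + 0.1 * (v @ v)       # quadratic stage cost
        p_next = p + dt * v                # kinematic model: p_dot = v

        # Critic: semi-gradient TD update on V(p) = w * ||p||^2
        delta = cost + gamma * w * (p_next @ p_next) - w * (p @ p)
        w += alpha_critic * delta * (p @ p)

        # Actor: gradient descent on Q(p, v) = cost + gamma * V(p + dt*v)
        # with dv/dk = -p, so dQ/dk = (dQ/dv) . (-p)
        dQ_dv = 0.2 * v + 2.0 * gamma * w * dt * (p + dt * v)
        k -= alpha_actor * (dQ_dv @ (-p))

        p = p_next

print(f"learned gain k = {k:.2f}, critic weight w = {w:.2f}")
```

In this sketch the learned gain grows until the control-effort penalty balances the discounted cost-to-go, mirroring how the paper's upper-level actor-critic produces desired velocities that the lower-level (kinetic) controller would then track algebraically.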