Reinforcement Learning-Based Evolving Flight Controller for Fixed-Wing Uncrewed Aircraft
Daksh Shukla; Hady Benyamen; Shawn Keshmiri; Nicole M. Beckage
IEEE Transactions on Control Systems Technology, vol. 33, no. 3, pp. 872-886
DOI: 10.1109/TCST.2024.3516383 (https://ieeexplore.ieee.org/document/10807509/)
Published: 2024-12-19
Citations: 0
Abstract
A significant challenge in designing flight controllers lies in their dependency on the quality of dynamic models. This research explores the potential of artificial intelligence-based flight controllers to generalize control actions around policies rather than relying solely on the accuracy of dynamic models. An engineering-level, low-fidelity, linearized model of a fixed-wing uncrewed aircraft is used to train a multi-input multi-output (MIMO) flight controller, employing the deep deterministic policy gradients (DDPG) algorithm, to maintain cruise velocity and altitude. While existing literature often concentrates on simulation-based assessments of reinforcement learning (RL)-based flight controllers, this research employs an extensive flight test campaign comprising 15 flight tests to explore the reliability, robustness, and generalization capability of RL algorithms in tasks they were not specifically trained for, such as changing cruise altitude and velocity. The RL controller outperformed a well-tuned linear quadratic regulator (LQR) on several control tasks. Furthermore, a modification to the DDPG algorithm is presented to enhance the ability of RL controllers to evolve through experience gained from actual flights. The evolved controllers exhibit different behavior from the original controller. Comparative flight tests underscored the crucial role of the ratio of actual flight data to the number of simulation-based training instances in optimizing the evolved controllers.
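For readers unfamiliar with the training setup the abstract describes, the following is a minimal, illustrative DDPG sketch in PyTorch: an actor-critic pair with target networks and a replay buffer, trained to regulate velocity and altitude deviations of a toy linearized longitudinal model. The state-space matrices, reward weights, network sizes, and hyperparameters are placeholder assumptions for illustration only; they are not the aircraft model, reward, or the modified DDPG variant used in the paper, and the evolving-controller extension (re-training on actual flight data mixed with simulation data) is not reproduced here.

# Illustrative DDPG sketch (PyTorch) for a velocity/altitude-hold task on a
# toy linearized longitudinal model. All matrices, rewards, network sizes,
# and hyperparameters are placeholder assumptions, not values from the paper.
import copy
import numpy as np
import torch
import torch.nn as nn

class LinearLongitudinalEnv:
    """Toy plant: state x = [dV, alpha, q, dtheta, dh], input u = [elevator, throttle]."""
    def __init__(self, dt=0.02):
        self.dt = dt
        # Placeholder continuous-time state-space matrices (not from the paper).
        self.A = np.array([[-0.02,   0.1,  0.0, -9.8,  0.0],
                           [-0.10,  -0.5,  1.0,  0.0,  0.0],
                           [ 0.00,  -2.0, -1.0,  0.0,  0.0],
                           [ 0.00,   0.0,  1.0,  0.0,  0.0],
                           [ 0.00, -20.0,  0.0, 20.0,  0.0]])
        self.B = np.array([[0.0, 1.0], [-0.1, 0.0], [-5.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
        self.x = np.zeros(5)

    def reset(self):
        self.x = 0.1 * np.random.randn(5)
        return self.x.copy()

    def step(self, u):
        # Forward-Euler integration of x_dot = A x + B u.
        self.x = self.x + self.dt * (self.A @ self.x + self.B @ u)
        # Penalize velocity/altitude deviations and control effort.
        reward = -(self.x[0] ** 2 + self.x[4] ** 2 + 0.01 * float(u @ u))
        return self.x.copy(), reward

def mlp(sizes, out_act=None):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]),
                   nn.ReLU() if i < len(sizes) - 2 else (out_act or nn.Identity())]
    return nn.Sequential(*layers)

obs_dim, act_dim = 5, 2
actor = mlp([obs_dim, 64, 64, act_dim], nn.Tanh())           # bounded actions in [-1, 1]
critic = mlp([obs_dim + act_dim, 64, 64, 1])                 # Q(s, a)
actor_targ, critic_targ = copy.deepcopy(actor), copy.deepcopy(critic)
pi_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
q_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer, gamma, tau = [], 0.99, 0.005

env = LinearLongitudinalEnv()
obs = env.reset()
for t in range(20000):
    with torch.no_grad():
        a = actor(torch.as_tensor(obs, dtype=torch.float32)).numpy()
    a = np.clip(a + 0.1 * np.random.randn(act_dim), -1.0, 1.0)   # exploration noise
    next_obs, r = env.step(a)
    buffer.append((obs, a, r, next_obs))
    obs = env.reset() if (t + 1) % 400 == 0 else next_obs        # fixed-length episodes

    if len(buffer) < 1000:
        continue
    idx = np.random.randint(len(buffer), size=128)
    o, a_b, r_b, o2 = (torch.as_tensor(np.array(v), dtype=torch.float32)
                       for v in zip(*[buffer[i] for i in idx]))
    # Critic update: regress Q(s, a) onto the bootstrapped target.
    with torch.no_grad():
        q_targ = r_b.unsqueeze(1) + gamma * critic_targ(torch.cat([o2, actor_targ(o2)], dim=1))
    q_loss = ((critic(torch.cat([o, a_b], dim=1)) - q_targ) ** 2).mean()
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()
    # Actor update: ascend the critic's estimate of Q(s, pi(s)).
    pi_loss = -critic(torch.cat([o, actor(o)], dim=1)).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
    # Polyak-average the target networks toward the online networks.
    with torch.no_grad():
        for net, net_t in ((actor, actor_targ), (critic, critic_targ)):
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.mul_(1 - tau).add_(tau * p)

In this sketch the trained actor would serve as the MIMO controller, mapping the state deviations directly to elevator and throttle commands; the paper's contribution of continued on-aircraft evolution would correspond to resuming this loop with transitions logged from actual flights.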
Journal Introduction
The IEEE Transactions on Control Systems Technology publishes high-quality technical papers on technological advances in control engineering. The word technology derives from the Greek technologia; its modern meaning is a scientific method used to achieve a practical purpose. Control systems technology encompasses all aspects of control engineering needed to implement practical control systems, from analysis and design through simulation and hardware. A primary purpose of the IEEE Transactions on Control Systems Technology is to provide an archival publication that bridges the gap between theory and practice. Papers published in the Transactions disclose significant new knowledge, exploratory developments, or practical applications in all aspects of the technology needed to implement control systems, from analysis and design through simulation and hardware.