Using Q-learning to Automatically Tune Quadcopter PID Controller Online for Fast Altitude Stabilization

Y. Alrubyli, Andrea Bonarini
DOI: 10.1109/ICMA54519.2022.9856292
Published in: 2022 IEEE International Conference on Mechatronics and Automation (ICMA), 2022-08-07

Abstract

Unmanned Aerial Vehicles (UAVs), and quadcopters in particular, need to remain stable during flight. Altitude stability is usually achieved by a PID controller built into the flight controller software. The PID controller's gains must be tuned to reach optimal altitude stabilization during flight. To do so, control system engineers typically rely on extensive modeling of the environment, which may vary from one environment and condition to another. As quadcopters penetrate more sectors, from military to consumer applications, they are placed in complex and challenging environments more than ever before. Hence, intelligent self-stabilizing quadcopters are needed to maneuver through such environments and situations. Here we show that, by using online reinforcement learning with minimal background knowledge, quadcopter altitude stability can be achieved with a model-free approach. We found that combining background knowledge with an activation function such as the sigmoid allows altitude stabilization to be reached faster and with a small memory footprint. In addition, this approach accelerates development by avoiding extensive simulation before applying the PID gains to the real-world quadcopter. Our results demonstrate the possibility of using the trial-and-error approach of reinforcement learning, combined with an activation function and background knowledge, to achieve faster quadcopter altitude stabilization across different environments and conditions.
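The approach the abstract describes — model-free Q-learning that adjusts PID gains online, with a sigmoid squashing the learned parameter into a bounded gain range — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy 1-D altitude model, the gain range of (0, 20), the reward signal, and the choice to tune only Kp are all assumptions made here for brevity.

```python
import math
import random

def sigmoid(x):
    """Squash an unbounded raw parameter into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def simulate_episode(kp, ki, kd, target=10.0, dt=0.02, steps=500):
    """Roll out a toy 1-D altitude model (unit mass, thrust vs. gravity)
    under PID control; return the mean absolute altitude error."""
    z, vz, integ, prev_err = 0.0, 0.0, 0.0, target
    total_err = 0.0
    for _ in range(steps):
        err = target - z
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # thrust command
        vz += (u - 9.81) * dt                    # gravity acts downward
        z += vz * dt
        prev_err = err
        total_err += abs(err)
    return total_err / steps

def tune_kp(episodes=300, alpha=0.2, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over discrete adjustments of a raw parameter
    theta; the effective gain is Kp = 20 * sigmoid(theta), so the sigmoid
    keeps the gain bounded and positive no matter what the agent does."""
    rng = random.Random(seed)
    actions = (-0.5, 0.0, 0.5)        # raw-parameter increments
    q = {}                            # (state, action) -> value
    theta = 0.0
    state = 9                         # coarse bucket of last episode's error
    best_kp, best_score = None, float("inf")
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: q.get((state, x), 0.0))
        theta = max(-5.0, min(5.0, theta + a))
        kp = 20.0 * sigmoid(theta)
        score = simulate_episode(kp, ki=2.0, kd=4.0)
        reward = -score               # less tracking error -> more reward
        next_state = min(int(score), 9)
        best_next = max(q.get((next_state, x), 0.0) for x in actions)
        key = (state, a)
        q[key] = q.get(key, 0.0) + alpha * (reward + gamma * best_next - q.get(key, 0.0))
        state = next_state
        if score < best_score:
            best_kp, best_score = kp, score
    return best_kp, best_score

if __name__ == "__main__":
    kp, score = tune_kp()
    print(f"tuned Kp = {kp:.2f}, mean |error| = {score:.3f} m")
```

The sigmoid plays the role the abstract hints at: it bounds the gain so that exploratory actions can never drive Kp negative or arbitrarily large, which is what makes trial-and-error tuning safe to run online without a prior plant model.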