Reinforcement learning based attitude fault-tolerant control of spacecraft with unknown system model

IF 4.2 | CAS Tier 3, Computer Science | JCR Q2, Automation & Control Systems
Shaolong Yang , Lei Jin , Jiaxuan Rao
DOI: 10.1016/j.jfranklin.2025.107741
Journal: Journal of The Franklin Institute - Engineering and Applied Mathematics, Vol. 362, Issue 10, Article 107741
Published: 2025-05-09 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0016003225002340
Citations: 0

Abstract

With the increasing reliability and safety requirements of spacecraft control systems, there is an urgent need for effective fault-tolerant control methods that maintain high control performance when actuators fail. Given the uncertainty and suddenness of faults, strong adaptability and high real-time performance are essential for fault-tolerant control. This paper therefore proposes a reinforcement-learning-based fault-tolerant attitude controller for a spacecraft with an unknown system model. First, ignoring the effects of inertia uncertainty, actuator faults, and external disturbances, reinforcement learning is combined with optimal control to design an offline approximate optimal control policy for the known nominal system. Next, a neural-network-based observer is designed to estimate the system input matrix and approximate the unknown system dynamics, eliminating the offline nominal controller's dependence on the system model and mitigating the impact of multiplicative actuator faults on attitude control. Subsequently, the offline nominal controller serves as the initial control strategy, and the critic network's parameters are updated online using the recursive least squares (RLS) method, yielding an online reinforcement learning controller with improved real-time performance. In addition, using the partial disturbances estimated by the observer, a feedforward compensation algorithm counteracts the adverse effects of additive actuator faults and external disturbance torques on attitude control performance, completing the online reinforcement-learning-based fault-tolerant control scheme. Finally, the stability of the control system is proved via the Lyapunov method, and the effectiveness of the proposed fault-tolerant control is demonstrated through simulations.
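The abstract's online step updates the critic network's parameters with recursive least squares. As a minimal sketch of that idea (not the paper's exact algorithm), the snippet below runs an RLS update on the weights W of a linearly parameterized critic V(x) ≈ Wᵀφ(x); the feature map `phi`, the Bellman `target`, and the forgetting factor `lam` are illustrative assumptions, and the demo simply recovers a known weight vector from noisy linear observations.

```python
import numpy as np

class RLSCritic:
    """Recursive least squares update for linearly parameterized critic weights.

    Illustrative sketch only: V(x) ~ W @ phi(x); feature map and targets
    are assumptions for this demo, not the paper's definitions.
    """

    def __init__(self, n_features, lam=0.99, p0=1e3):
        self.W = np.zeros(n_features)        # critic weights
        self.P = np.eye(n_features) * p0     # inverse-covariance estimate
        self.lam = lam                       # forgetting factor

    def update(self, phi, target):
        """One RLS step: move W so that W @ phi tracks the target."""
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)           # RLS gain vector
        err = target - self.W @ phi                     # a-priori residual
        self.W = self.W + gain * err                    # weight update
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam

# Tiny demo: recover known weights from noisy linear observations.
rng = np.random.default_rng(0)
true_W = np.array([2.0, -1.0, 0.5])
critic = RLSCritic(3)
for _ in range(200):
    phi = rng.standard_normal(3)
    critic.update(phi, true_W @ phi + 1e-3 * rng.standard_normal())
print(np.round(critic.W, 2))
```

Because each step costs only a few matrix-vector products, an RLS-style update like this is well suited to the online, real-time setting the abstract emphasizes.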
Source journal
CiteScore: 7.30
Self-citation rate: 14.60%
Articles per year: 586
Review time: 6.9 months
Journal description: The Journal of The Franklin Institute has an established reputation for publishing high-quality papers in the field of engineering and applied mathematics. Its current focus is on control systems, complex networks and dynamic systems, signal processing and communications, and their applications. All submitted papers are peer-reviewed. The journal publishes original research papers and substantial review papers. Papers and special focus issues are judged on possible lasting value, which has been and continues to be the strength of the Journal of The Franklin Institute.