{"title":"利用基于策略迭代的自适应动态编程为具有 DoS 攻击的离散-时非线性马尔可夫跃迁系统设计自适应事件触发控制和观测器","authors":"Hongqian Lu, Haobo Xing, Wuneng Zhou","doi":"10.1002/oca.3142","DOIUrl":null,"url":null,"abstract":"This article addresses adaptive event‐triggered discrete‐time nonlinear Markov jump systems (MJs) with DoS attacks, where the introduced DoS attacks are considered as more general stochastic models with fixed trigger frequency and duration. To solve the optimal control problem, we use an adaptive dynamic programming (ADP) algorithm based on policy iteration (PI). The approximate process is as follows: the performance index function (PIF) is first updated by the iteration policy in advance, and the control policy is obtained from the PIF. Subsequently, an approximate estimation of the optimal PIF and the optimal control policy is made using the actor‐critic structure obtained through neural network techniques. In order to reduce the occupied communication resources required for control policy iteration, we introduce an adaptive event triggering mechanism with an adaptive triggering threshold, which reduces the conservatism of resource occupation by the PIF compared to the fixed‐threshold ETM. In addition, an observer identifying the unknown dynamics part of the system is designed. Finally, using the Lyapunov function, it is shown that the designed control policy ensures the stability and convergence of the MJS, and the designed observer is effective. Simulation examples are given to verify the feasibility of the controller and the observer.","PeriodicalId":501055,"journal":{"name":"Optimal Control Applications and Methods","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive event‐triggered control and observer design for discrete‐time nonlinear Markov jump systems with DoS attacks using policy iteration‐based adaptive dynamic programming\",\"authors\":\"Hongqian Lu, Haobo Xing, Wuneng Zhou\",\"doi\":\"10.1002/oca.3142\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article addresses adaptive event‐triggered discrete‐time nonlinear Markov jump systems (MJs) with DoS attacks, where the introduced DoS attacks are considered as more general stochastic models with fixed trigger frequency and duration. To solve the optimal control problem, we use an adaptive dynamic programming (ADP) algorithm based on policy iteration (PI). The approximate process is as follows: the performance index function (PIF) is first updated by the iteration policy in advance, and the control policy is obtained from the PIF. Subsequently, an approximate estimation of the optimal PIF and the optimal control policy is made using the actor‐critic structure obtained through neural network techniques. In order to reduce the occupied communication resources required for control policy iteration, we introduce an adaptive event triggering mechanism with an adaptive triggering threshold, which reduces the conservatism of resource occupation by the PIF compared to the fixed‐threshold ETM. In addition, an observer identifying the unknown dynamics part of the system is designed. Finally, using the Lyapunov function, it is shown that the designed control policy ensures the stability and convergence of the MJS, and the designed observer is effective. 
Simulation examples are given to verify the feasibility of the controller and the observer.\",\"PeriodicalId\":501055,\"journal\":{\"name\":\"Optimal Control Applications and Methods\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optimal Control Applications and Methods\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/oca.3142\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optimal Control Applications and Methods","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/oca.3142","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This article addresses adaptive event‐triggered control of discrete‐time nonlinear Markov jump systems (MJSs) subject to DoS attacks, where the DoS attacks are modeled as a more general stochastic process with fixed trigger frequency and duration. To solve the optimal control problem, an adaptive dynamic programming (ADP) algorithm based on policy iteration (PI) is used. The approximation proceeds as follows: the performance index function (PIF) is first updated under the current iteration's policy, and an improved control policy is then derived from the updated PIF. The optimal PIF and the optimal control policy are subsequently approximated by an actor‐critic structure built with neural network techniques. To reduce the communication resources consumed by control policy iteration, an adaptive event‐triggering mechanism (ETM) with an adaptive triggering threshold is introduced; it is less conservative in resource occupation than a fixed‐threshold ETM. In addition, an observer is designed to identify the unknown part of the system dynamics. Finally, a Lyapunov function argument shows that the designed control policy guarantees the stability and convergence of the MJS and that the designed observer is effective. Simulation examples verify the feasibility of the controller and the observer.
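As background on the PI step described in the abstract, the alternating policy-evaluation and policy-improvement updates of discrete-time PI-based ADP can be written generically as below. This is a sketch with stage cost U and dynamics f; the paper's mode-dependent formulation for Markov jump dynamics will differ in detail.

```latex
% Policy evaluation: solve for the PIF V_i under the current policy u_i
V_i(x_k) = U\bigl(x_k, u_i(x_k)\bigr) + V_i(x_{k+1}), \qquad
x_{k+1} = f\bigl(x_k, u_i(x_k)\bigr),

% Policy improvement: greedy update of the control policy from V_i
u_{i+1}(x_k) = \arg\min_{u} \Bigl[\, U(x_k, u) + V_i\bigl(f(x_k, u)\bigr) \Bigr].
```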
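The actor-critic structure can be pictured as two function approximators trained against the Bellman residual of the updates above. The sketch below is a minimal illustration only: the plant f, stage cost U, feature maps, and step sizes are all hypothetical, and the paper uses neural networks rather than the fixed feature bases shown here.

```python
import numpy as np

# Minimal actor-critic sketch for PI-based ADP on a hypothetical 1-D plant.
f = lambda x, u: 0.8 * np.sin(x) + u          # assumed nonlinear dynamics
U = lambda x, u: x**2 + u**2                  # assumed quadratic stage cost

phi = lambda x: np.array([x**2, x**4])        # critic features: V(x) ~ w @ phi(x)
psi = lambda x: np.array([x, x**3])           # actor features:  u(x) ~ th @ psi(x)
w, th = np.zeros(2), np.array([-0.5, 0.0])    # initial admissible policy

lr_c, lr_a, d = 0.05, 0.01, 1e-3
rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)
    u = th @ psi(x)
    x1 = f(x, u)
    # Critic step: descend the squared Bellman residual e = U + V(x1) - V(x).
    e = U(x, u) + w @ phi(x1) - w @ phi(x)
    w -= lr_c * e * (phi(x1) - phi(x))
    # Actor step: finite-difference gradient of U(x,u) + V(f(x,u)) w.r.t. u.
    g = (U(x, u + d) + w @ phi(f(x, u + d))
         - U(x, u - d) - w @ phi(f(x, u - d))) / (2 * d)
    th -= lr_a * g * psi(x)

print("critic weights:", w, "actor weights:", th)
```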
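An adaptive triggering threshold is what distinguishes the proposed ETM from a fixed-threshold one. The rule below shows one common pattern, in which the threshold adapts with the triggering history; it is offered purely as an illustration, and the names and parameters (sigma_k, alpha, the bounds) are not the paper's notation.

```python
import numpy as np

def adaptive_etm(e_k, x_k, sigma_k, alpha=0.1, sigma_min=0.05, sigma_max=0.5):
    """Illustrative adaptive event-triggering rule (not the paper's exact law).
    Transmit when the measurement error e_k exceeds an adaptive fraction
    sigma_k of the current state norm."""
    triggered = np.linalg.norm(e_k) > sigma_k * np.linalg.norm(x_k)
    # Adapt the threshold: tighten it after a trigger (errors are growing),
    # relax it toward sigma_max during quiet periods to save transmissions.
    sigma_next = (1 - alpha) * sigma_k if triggered else sigma_k + alpha * (sigma_max - sigma_k)
    return triggered, float(np.clip(sigma_next, sigma_min, sigma_max))

# Example: a 20% relative error against threshold 0.3 does not trigger,
# so the threshold relaxes slightly.
trig, sigma = adaptive_etm(np.array([0.2]), np.array([1.0]), sigma_k=0.3)
```

Compared with a fixed threshold, a rule of this shape transmits more often when the error is persistently large and backs off when the system is quiet, which is the source of the reduced conservatism claimed in the abstract.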