Title: Reinforcement learning-based adaptive event-triggered control of multi-agent systems with time-varying dead-zone
Authors: Xin Li, Dakuo He, Qiang Zhang, Hailong Liu
DOI: 10.1016/j.amc.2024.129059
Publication date: 2024-09-10 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0096300324005204
Abstract
In this paper, a novel reinforcement learning (RL)-based adaptive event-triggered control problem is studied for non-affine multi-agent systems (MASs) with time-varying dead-zone. The aim is to design an efficient event-triggered mechanism that achieves optimal control of MASs. Compared with existing results, an improved smooth event-triggered mechanism is proposed, which not only overcomes the design difficulties caused by discontinuous trigger signals but also reduces the waste of communication resources. To achieve optimal event-triggered control, an RL algorithm with an identifier-critic-actor structure based on fuzzy logic systems (FLSs) is applied to estimate the system dynamics, evaluate the control performance, and execute the control behavior, respectively. In addition, the time-varying dead-zone in non-affine MASs poses obstacles to controller design, and accounting for it makes the results applicable to a broader class of systems. Using Lyapunov theory, it is proved that optimal control performance is achieved and the tracking error converges to a small neighborhood of the origin. Finally, simulations demonstrate the feasibility of the proposed method.
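To illustrate the kind of communication savings an event-triggered mechanism provides, the following Python sketch implements a generic relative-threshold trigger. This is a common construction in the event-triggered control literature, not the paper's actual triggering law; the thresholds `delta` and `m` and the sine-wave control signal are illustrative assumptions.

```python
import numpy as np

def event_triggered_update(u_desired, u_last, delta=0.2, m=0.05):
    """Relative-threshold event trigger: transmit a new control value only
    when the gap between the continuously computed control u_desired and
    the last transmitted value u_last exceeds delta*|u_desired| + m.
    Returns (applied control, whether an event fired)."""
    gap = abs(u_desired - u_last)
    if gap >= delta * abs(u_desired) + m:
        return u_desired, True   # event fires: transmit the new value
    return u_last, False         # hold the last value: no communication

# Simulate a continuously computed control signal and count transmissions.
t = np.linspace(0.0, 10.0, 1000)
u_cont = np.sin(t)               # stand-in for a continuously computed control
u_held = u_cont[0]
events = 0
for u in u_cont:
    u_held, fired = event_triggered_update(u, u_held)
    events += fired
print(events, len(t))            # transmissions vs. total sampling instants
```

Because an update only occurs when the deviation from the last transmitted value crosses the threshold, the number of transmissions is much smaller than the number of sampling instants, which is exactly the resource saving the abstract refers to.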
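The identifier, critic, and actor in schemes of this kind are typically linearly parameterized fuzzy logic systems, i.e. approximators of the form y = θᵀξ(x) with normalized fuzzy basis functions ξ(x). The sketch below shows such an FLS approximator with Gaussian membership functions; the centers, width, and target function are illustrative assumptions, and the weights θ are fit offline by least squares here rather than by the paper's online adaptive laws.

```python
import numpy as np

def fls_basis(x, centers, width=0.5):
    """Normalized Gaussian fuzzy basis functions xi(x): each rule's
    firing strength, normalized so the basis values sum to one."""
    firing = np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
    return firing / firing.sum()

def fls_output(x, theta, centers, width=0.5):
    """FLS output y = theta^T xi(x): a linearly parameterized approximator,
    the form the identifier, critic, and actor each adapt online."""
    return theta @ fls_basis(x, centers, width)

# Fit theta by least squares to approximate a stand-in "unknown" function.
centers = np.linspace(-2.0, 2.0, 9)
xs = np.linspace(-2.0, 2.0, 200)
Phi = np.stack([fls_basis(x, centers) for x in xs])   # 200 x 9 basis matrix
target = np.tanh(xs) + 0.5 * xs                       # stand-in for unknown dynamics
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ theta
print(np.max(np.abs(approx - target)))                # small approximation error
```

The linear-in-parameters form is what makes the Lyapunov-based stability analysis tractable: the approximation error enters the error dynamics as a bounded residual, which is why the tracking error converges to a neighborhood of the origin rather than to zero exactly.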