Reinforcement learning-based adaptive event-triggered control of multi-agent systems with time-varying dead-zone

IF 4.3 · CAS Zone 3 (Materials Science) · JCR Q1 (Engineering, Electrical & Electronic)
Xin Li, Dakuo He, Qiang Zhang, Hailong Liu
{"title":"Reinforcement learning-based adaptive event-triggered control of multi-agent systems with time-varying dead-zone","authors":"Xin Li ,&nbsp;Dakuo He ,&nbsp;Qiang Zhang ,&nbsp;Hailong Liu","doi":"10.1016/j.amc.2024.129059","DOIUrl":null,"url":null,"abstract":"<div><p>In this paper, a novel reinforcement learning (RL)-based adaptive event-triggered control problem is studied for non-affine multi-agent systems (MASs) with time-varying dead-zone. The purpose is to design an efficient event-triggered mechanism to achieve optimal control of MASs. Compared with the existing results, an improved smooth event-triggered mechanism is proposed, which not only overcomes the design difficulties caused by discontinuous trigger signals, but also reduces the waste of communication resources. In order to achieve optimal event-triggered control, RL algorithm of the identifier-critic-actor structure based on fuzzy logic systems (FLSs) is applied to estimate system dynamics, evaluate control performance, and execute control behavior, respectively. In addition, considering time-varying dead-zone in non-affine MASs brings obstacles to controller design, which makes system applications more generalized. Through Lyapunov theory, it is proved that the optimal control performance can be achieved and the tracking error converges to a small neighborhood of the origin. Finally, simulation proves the feasibility of the proposed method.</p></div>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0096300324005204","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, a novel reinforcement learning (RL)-based adaptive event-triggered control problem is studied for non-affine multi-agent systems (MASs) with a time-varying dead-zone. The goal is to design an efficient event-triggered mechanism that achieves optimal control of the MASs. Compared with existing results, an improved smooth event-triggered mechanism is proposed that both overcomes the design difficulties caused by discontinuous trigger signals and reduces the waste of communication resources. To achieve optimal event-triggered control, an RL algorithm with an identifier-critic-actor structure based on fuzzy logic systems (FLSs) is applied to estimate the system dynamics, evaluate the control performance, and execute the control actions, respectively. In addition, the time-varying dead-zone considered in the non-affine MASs complicates the controller design but makes the results applicable to a broader class of systems. Using Lyapunov theory, it is proved that the optimal control performance is achieved and the tracking error converges to a small neighborhood of the origin. Finally, simulations demonstrate the feasibility of the proposed method.
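To make the dead-zone assumption concrete, the sketch below implements the standard time-varying dead-zone input model with time-dependent slopes and breakpoints. The parameter functions and numbers are illustrative placeholders, not the paper's actual bounds or assumptions.

```python
import numpy as np

def dead_zone(v, t, m_r, m_l, b_r, b_l):
    """Generic time-varying dead-zone input nonlinearity (illustrative).

    v          : commanded control input
    t          : current time
    m_r, m_l   : callables giving the right/left slopes at time t
    b_r, b_l   : callables giving the right/left breakpoints at time t
    """
    mr, ml = m_r(t), m_l(t)
    br, bl = b_r(t), b_l(t)
    if v >= br:
        return mr * (v - br)      # right branch
    elif v <= bl:
        return ml * (v - bl)      # left branch
    return 0.0                    # inside the dead band: no actuation

# Example with hypothetical time-varying parameters
u = dead_zone(1.5, t=2.0,
              m_r=lambda t: 1.0 + 0.1 * np.sin(t),
              m_l=lambda t: 0.9,
              b_r=lambda t: 0.3 + 0.05 * np.cos(t),
              b_l=lambda t: -0.4)
```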
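The abstract does not give the exact form of the improved smooth event-triggered mechanism, so the following is only a minimal sketch of a generic relative-threshold trigger: a new control value is transmitted only when the deviation from the last transmitted value exceeds a control-dependent threshold. Smooth designs typically replace the discontinuous switching terms with tanh-type functions in the applied control; the thresholds `delta` and `m` below are hypothetical.

```python
def should_trigger(u_current, u_last_sent, delta=0.2, m=0.05):
    """Relative-threshold event-trigger check (not the paper's exact mechanism).

    Fires when the deviation between the continuously computed control
    u_current and the last transmitted control u_last_sent exceeds a
    fraction of |u_current| plus a small constant, so transmissions stay
    sparse while the control signal changes slowly.
    """
    e = abs(u_current - u_last_sent)
    return e >= delta * abs(u_current) + m
```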
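A minimal sketch of the identifier-critic-actor structure built from fuzzy logic systems is given below, assuming Gaussian membership functions and a normalized fuzzy-basis parameterization y = Wᵀφ(x). The identifier approximates the unknown dynamics, the critic evaluates a cost signal, and the actor produces the control; the plain gradient update shown here stands in for the paper's adaptive laws, which are not reproduced in the abstract.

```python
import numpy as np

class FLS:
    """Fuzzy logic system with Gaussian membership functions,
    used as a generic nonlinear approximator: y = W^T * phi(x)."""
    def __init__(self, centers, width, n_out=1, seed=0):
        self.centers = np.asarray(centers)            # (n_rules, n_in)
        self.width = width
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((len(centers), n_out))

    def phi(self, x):
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        m = np.exp(-d2 / (2 * self.width ** 2))
        return m / (np.sum(m) + 1e-9)                 # normalized fuzzy basis

    def __call__(self, x):
        return self.phi(x) @ self.W

def update(fls, x, target, lr=0.05):
    """One gradient step on the squared output error (illustrative only)."""
    err = fls(x) - target
    fls.W -= lr * np.outer(fls.phi(x), err)

# One FLS per role; rule centers on a coarse grid (hypothetical choice)
centers = [[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1)]
identifier, critic, actor = FLS(centers, 1.0), FLS(centers, 1.0), FLS(centers, 1.0)

x = np.array([0.5, -0.2])
update(identifier, x, target=0.1)   # e.g. fit the identifier to an observed value
```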
