Event-based distributed cooperative neural learning control for nonlinear multiagent systems with time-varying output constraints

Congyan Lv, Guangliang Liu, Yingnan Pan, Zhijian Hu, Yan Lei

Neural Networks, Volume 187, Article 107383. DOI: 10.1016/j.neunet.2025.107383. Published online: 2025-03-17.
Citations: 0
Abstract
In practical engineering, many systems must operate under various constraint conditions for reasons of system security, and violating these constraints during operation may degrade performance. Additionally, communication among agents depends heavily on the network, which inevitably imposes a communication burden on the control systems. To address these issues, this paper investigates the switching event-triggered distributed cooperative learning control problem for nonlinear multiagent systems with time-varying output constraints. An improved output-dependent universal barrier function with adjustable constraint boundaries is proposed, which uniformly handles symmetric or asymmetric output constraints without changing the controller structure. Meanwhile, an improved switching event-triggered condition is designed based on the neural network (NN) weights, allowing the system to adaptively adjust the NN weight update frequency according to system performance, thereby saving communication resources. Furthermore, the Padé approximation technique is employed to address the input delay issue and simplify the controller design process. Using Lyapunov stability theory, it is proved that the outputs of all followers converge to a neighborhood of the leader's output without violating the output constraints, and that all signals in the closed-loop system remain ultimately bounded. Finally, the effectiveness of the proposed approach is verified through simulation results.
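The paper's universal barrier function is not reproduced in the abstract; as a generic illustration of the idea only, the log-type barrier sketched below is finite inside asymmetric bounds k_l < y < k_u and grows unboundedly as the output approaches either bound, so keeping it bounded along trajectories keeps the output inside the constraint region. The function name and normalization here are illustrative assumptions, not the authors' construction.

```python
import math

def log_barrier(y: float, kl: float, ku: float) -> float:
    """Illustrative log-type barrier for an asymmetric constraint kl < y < ku.

    Zero at the midpoint of the interval, positive elsewhere, and tends
    to +inf as y approaches either bound.
    """
    if not kl < y < ku:
        raise ValueError("output outside the constraint region")
    return math.log((ku - kl) ** 2 / (4.0 * (ku - y) * (y - kl)))

# Symmetric bounds (kl = -ku) are just a special case: the same function
# applies unchanged, mirroring the claim that one barrier handles
# symmetric and asymmetric constraints uniformly.
print(log_barrier(0.0, -1.0, 1.0))   # midpoint of symmetric bounds -> 0.0
print(log_barrier(0.9, -1.0, 1.0))   # near the upper bound -> positive, growing
```

Note that only the function's argument values change between the symmetric and asymmetric cases; the barrier's form, and hence the controller structure built on it, stays fixed.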
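The abstract states that a Padé approximation is used to handle the input delay. A minimal numerical sketch of the standard first-order (1,1) Padé approximant, e^{-τs} ≈ (2 − τs)/(2 + τs), is given below, checked against the exact delay on the imaginary axis; the helper names are illustrative, and the paper's actual controller design is not reproduced here.

```python
import cmath

def pade_delay_1(tau: float):
    """First-order (1,1) Pade approximant of the delay e^{-tau*s}:
    e^{-tau*s} ~= (2 - tau*s) / (2 + tau*s).
    Returns (numerator, denominator) coefficients ordered [s^1, s^0]."""
    return [-tau, 2.0], [tau, 2.0]

def eval_tf(num, den, s: complex) -> complex:
    """Evaluate a first-order rational transfer function at complex frequency s."""
    return (num[0] * s + num[1]) / (den[0] * s + den[1])

tau = 0.1
num, den = pade_delay_1(tau)
for w in (0.5, 1.0, 5.0):
    s = 1j * w
    exact = cmath.exp(-tau * s)        # exact delay on the jw-axis
    approx = eval_tf(num, den, s)
    print(f"w={w}: |error| = {abs(exact - approx):.2e}")
```

The (1,1) approximant is all-pass (unit magnitude at every frequency), so it distorts only phase; this is what lets the delay be absorbed into a finite-dimensional model and removed from the controller design. Higher-order approximants extend the accurate frequency range at the cost of extra states.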
About the Journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.