{"title":"基于粒子群优化神经网络的输入约束非零和博弈事件触发控制","authors":"Qiuye Wu , Bo Zhao , Derong Liu","doi":"10.1016/j.neunet.2025.107430","DOIUrl":null,"url":null,"abstract":"<div><div>To accommodate the increasing system scale, improve the system operation success rate and save the computational and communication resources, it is urgent to obtain the Nash equilibrium solution for systems with increasing controllers in an effective way. In this paper, nonzero-sum game problem of partially unknown nonlinear systems with input constraints is solved via the particle swarm optimized neural network-based integral reinforcement learning. By introducing the integral reinforcement learning technique, the drift dynamics is not required any more. To further improve the success rate of system operation, extended adaptive particle swarm optimization algorithm which shares the individual historical optimal position with the whole population is adopted in tuning neural network weights, rather than sharing only the current particle in the traditional particle swarm optimization algorithm. The control policy for each player is obtained by solving the coupled Hamilton–Jacobi equation with a single critic neural network, which simplifies the control structure and reduces the computational burden. Moreover, by introducing the event-triggering mechanism, the control policies are updated at event-triggering instants only. Thus, the computational and communication burdens are further reduced. The stability of the closed-loop system is guaranteed by implementing the integral reinforcement learning-based event-triggered control policies via the Lyapunov’s direct method. From the comparative simulation results, the developed integral reinforcement learning-based event-triggered control scheme via the extended adaptive particle swarm optimization performs better than those using gradient descent algorithm, nonlinear programming, particle swarm optimization and other popular training algorithms.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107430"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Event-triggered control for input-constrained nonzero-sum games through particle swarm optimized neural networks\",\"authors\":\"Qiuye Wu , Bo Zhao , Derong Liu\",\"doi\":\"10.1016/j.neunet.2025.107430\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>To accommodate the increasing system scale, improve the system operation success rate and save the computational and communication resources, it is urgent to obtain the Nash equilibrium solution for systems with increasing controllers in an effective way. In this paper, nonzero-sum game problem of partially unknown nonlinear systems with input constraints is solved via the particle swarm optimized neural network-based integral reinforcement learning. By introducing the integral reinforcement learning technique, the drift dynamics is not required any more. To further improve the success rate of system operation, extended adaptive particle swarm optimization algorithm which shares the individual historical optimal position with the whole population is adopted in tuning neural network weights, rather than sharing only the current particle in the traditional particle swarm optimization algorithm. 
The control policy for each player is obtained by solving the coupled Hamilton–Jacobi equation with a single critic neural network, which simplifies the control structure and reduces the computational burden. Moreover, by introducing the event-triggering mechanism, the control policies are updated at event-triggering instants only. Thus, the computational and communication burdens are further reduced. The stability of the closed-loop system is guaranteed by implementing the integral reinforcement learning-based event-triggered control policies via the Lyapunov’s direct method. From the comparative simulation results, the developed integral reinforcement learning-based event-triggered control scheme via the extended adaptive particle swarm optimization performs better than those using gradient descent algorithm, nonlinear programming, particle swarm optimization and other popular training algorithms.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"188 \",\"pages\":\"Article 107430\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-04-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025003090\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025003090","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Event-triggered control for input-constrained nonzero-sum games through particle swarm optimized neural networks
To accommodate increasing system scales, improve the success rate of system operation, and save computational and communication resources, it is urgent to obtain the Nash equilibrium solution for systems with a growing number of controllers in an effective way. In this paper, the nonzero-sum game problem for partially unknown nonlinear systems with input constraints is solved via particle swarm optimized neural network-based integral reinforcement learning. By introducing the integral reinforcement learning technique, knowledge of the drift dynamics is no longer required. To further improve the success rate of system operation, an extended adaptive particle swarm optimization algorithm, which shares each particle's historical optimal position with the whole population rather than with the current particle only as in the traditional particle swarm optimization algorithm, is adopted to tune the neural network weights. The control policy of each player is obtained by solving the coupled Hamilton–Jacobi equations with a single critic neural network, which simplifies the control structure and reduces the computational burden. Moreover, by introducing an event-triggering mechanism, the control policies are updated only at event-triggering instants, which further reduces the computational and communication burdens. The stability of the closed-loop system under the integral reinforcement learning-based event-triggered control policies is guaranteed via Lyapunov's direct method. Comparative simulation results show that the developed integral reinforcement learning-based event-triggered control scheme with the extended adaptive particle swarm optimization outperforms schemes trained with the gradient descent algorithm, nonlinear programming, standard particle swarm optimization, and other popular training algorithms.
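The abstract does not specify the exact update rule of the extended adaptive particle swarm optimization, so the following Python sketch only illustrates the stated idea under an assumed form: besides the usual personal-best and global-best terms, every particle is also attracted toward information shared from the whole population's historical best positions (taken here as their mean). The fitness function is left abstract; in the critic-tuning setting it would measure how well a candidate weight vector satisfies the integral Bellman equation.

```python
import numpy as np

def extended_adaptive_pso(fitness, dim, n_particles=30, n_iters=200,
                          w=0.7, c1=1.5, c2=1.5, c3=0.5, bounds=(-1.0, 1.0)):
    """Sketch of an 'extended' PSO in which every particle also exploits the
    historical best positions of the whole population (assumed here to enter
    through their mean), not only its own personal best and the global best.
    The paper's exact sharing rule is not given in the abstract."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions = candidate critic weights
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # personal historical best positions
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()    # global best position

    for _ in range(n_iters):
        shared = pbest.mean(axis=0)               # population-shared historical information (assumed form)
        r1, r2, r3 = rng.random((3, n_particles, dim))
        v = (w * v
             + c1 * r1 * (pbest - x)              # cognitive term
             + c2 * r2 * (gbest - x)              # social term
             + c3 * r3 * (shared - x))            # extra term sharing all personal bests
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()
```

For critic tuning, `fitness` could be the mean squared integral temporal-difference residual evaluated over collected state trajectories; that particular choice is an illustrative assumption, not the paper's definition.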
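The abstract states that each player's constrained policy is obtained from the coupled Hamilton–Jacobi equations through a single critic network. A commonly used construction in constrained adaptive dynamic programming, assumed here for illustration rather than quoted from the paper, approximates each value function with a critic weight vector and bounds the policy with a hyperbolic tangent:

$$\hat V_i(x) = \hat W_i^{\top}\phi_i(x), \qquad
\hat u_i(x) = -\lambda_i \tanh\!\Big(\tfrac{1}{2\lambda_i}\, R_{ii}^{-1} g_i^{\top}(x)\,\nabla\phi_i(x)^{\top}\hat W_i\Big),$$

where $\phi_i$ is the activation vector of the critic for player $i$, $g_i$ is the corresponding input dynamics, $R_{ii}$ is a positive definite weighting matrix, and $\lambda_i$ is the input bound, so that $\lvert \hat u_i \rvert \le \lambda_i$ holds by construction.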
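The abstract only says that the control policies are updated at event-triggering instants; the concrete triggering condition is not reported there. The sketch below therefore uses a generic rule, recomputing the input whenever the gap between the current state and the last sampled state exceeds a fixed threshold, with a zero-order hold between events; `f` and `policy` are user-supplied stand-ins for the plant dynamics and the learned constrained policy.

```python
import numpy as np

def simulate_event_triggered(f, policy, x0, dt=0.01, t_final=10.0, e_threshold=0.05):
    """Sketch of an event-triggered control loop: the control input is
    recomputed only when the gap between the current state and the state
    at the last triggering instant exceeds a threshold (assumed rule);
    between events the previous input is held (zero-order hold)."""
    x = np.asarray(x0, dtype=float)
    x_trigger = x.copy()                # state sampled at the last event
    u = policy(x_trigger)               # initial control computation
    n_events, n_steps = 1, int(t_final / dt)
    for _ in range(n_steps):
        if np.linalg.norm(x - x_trigger) >= e_threshold:   # triggering condition (illustrative)
            x_trigger = x.copy()
            u = policy(x_trigger)       # policy updated only at events
            n_events += 1
        x = x + dt * f(x, u)            # Euler step of the plant dynamics
    return x, n_events
```

Comparing `n_events` with the number of integration steps gives a rough picture of the computation and communication saved relative to a time-triggered implementation that evaluates the policy at every step.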
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.