A distributed penalty-based zeroing neural network for time-varying optimization with both equality and inequality constraints and its application to cooperative control of redundant robot manipulators.
IF 2.6 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence
{"title":"A distributed penalty-based zeroing neural network for time-varying optimization with both equality and inequality constraints and its application to cooperative control of redundant robot manipulators.","authors":"Liu He, Hui Cheng, Yunong Zhang","doi":"10.3389/fnbot.2025.1553623","DOIUrl":null,"url":null,"abstract":"<p><p>This study addresses the distributed optimization problem with time-varying objective functions and time-varying constraints in a multi-agent system (MAS). To tackle the distributed time-varying constrained optimization (DTVCO) problem, each agent in the MAS communicates with its neighbors while relying solely on local information, such as its own objective function and constraints, to compute the optimal solution. We propose a novel penalty-based zeroing neural network (PB-ZNN) to solve the continuous-time DTVCO (CTDTVCO) problem. The PB-ZNN model incorporates two penalty functions: The first penalizes agents for deviating from the states of their neighbors, driving all agents to reach a consensus, and the second penalizes agents for falling outside the feasible range, ensuring that the solutions of all agents remain within the constraints. The PB-ZNN model solves the CTDTVCO problem in a semi-centralized manner, where information exchange between agents is distributed, but computation is centralized. Building on the semi-centralized PB-ZNN model, we adopt the Euler formula to develop a distributed PB-ZNN (DPB-ZNN) algorithm for solving the discrete-time DTVCO (DTDTVCO) problem in a fully distributed manner. We present and prove the convergence theorems of the proposed PB-ZNN model and DPB-ZNN algorithm. The efficacy and accuracy of the DPB-ZNN algorithm are illustrated through numerical examples, including a simulation experiment applying the algorithm to the cooperative control of redundant manipulators.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1553623"},"PeriodicalIF":2.6000,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11955690/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Neurorobotics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.3389/fnbot.2025.1553623","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This study addresses the distributed optimization problem with time-varying objective functions and time-varying constraints in a multi-agent system (MAS). To tackle the distributed time-varying constrained optimization (DTVCO) problem, each agent in the MAS communicates with its neighbors while relying solely on local information, such as its own objective function and constraints, to compute the optimal solution. We propose a novel penalty-based zeroing neural network (PB-ZNN) to solve the continuous-time DTVCO (CTDTVCO) problem. The PB-ZNN model incorporates two penalty functions: The first penalizes agents for deviating from the states of their neighbors, driving all agents to reach a consensus, and the second penalizes agents for falling outside the feasible range, ensuring that the solutions of all agents remain within the constraints. The PB-ZNN model solves the CTDTVCO problem in a semi-centralized manner, where information exchange between agents is distributed, but computation is centralized. Building on the semi-centralized PB-ZNN model, we adopt the Euler formula to develop a distributed PB-ZNN (DPB-ZNN) algorithm for solving the discrete-time DTVCO (DTDTVCO) problem in a fully distributed manner. We present and prove the convergence theorems of the proposed PB-ZNN model and DPB-ZNN algorithm. The efficacy and accuracy of the DPB-ZNN algorithm are illustrated through numerical examples, including a simulation experiment applying the algorithm to the cooperative control of redundant manipulators.
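To make the idea concrete, the sketch below illustrates the general flavor of a distributed, penalty-based, Euler-discretized update: each agent keeps a local state, exchanges states only with its neighbors, and takes an Euler step on a local objective augmented with a consensus penalty and a constraint-violation penalty. This is a minimal illustrative sketch, not the paper's exact DPB-ZNN recursion; the local objective, the constraint, the penalty weights rho_c and rho_f, and the gain gamma are all assumptions chosen for demonstration.

```python
import numpy as np

def agent_step(x_i, neighbor_states, t, tau, rho_c=10.0, rho_f=10.0, gamma=5.0):
    """One Euler-discretized update for agent i at time t with step size tau.

    Illustrative only: the objective, constraint, and gains are hypothetical.
    """
    # Hypothetical local time-varying objective: f_i(x, t) = 0.5 * ||x - d(t)||^2
    d_t = np.array([np.sin(t), np.cos(t)])           # hypothetical desired signal
    grad_f = x_i - d_t

    # Consensus penalty: pull the local state toward the neighbors' states.
    grad_consensus = sum(x_i - x_j for x_j in neighbor_states)

    # Inequality-constraint penalty via max(0, g)^2 with a hypothetical
    # constraint g_i(x, t) = ||x||^2 - 1; its gradient vanishes when feasible.
    g = float(np.dot(x_i, x_i)) - 1.0
    grad_penalty = 4.0 * max(g, 0.0) * x_i

    # Euler step on the penalized gradient flow.
    dx = -gamma * (grad_f + rho_c * grad_consensus + rho_f * grad_penalty)
    return x_i + tau * dx

# Usage: three agents on a line graph tracking the same time-varying target.
states = [np.zeros(2) for _ in range(3)]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
tau = 1e-3
for k in range(5000):
    t = k * tau
    states = [agent_step(states[i], [states[j] for j in neighbors[i]], t, tau)
              for i in range(3)]
print(states)  # the agents end up close to one another, near the feasible target
```

In the paper, the penalty terms enter a zeroing-neural-network error dynamic rather than a plain gradient flow, and the Euler formula is what turns the semi-centralized continuous-time PB-ZNN model into the fully distributed discrete-time DPB-ZNN algorithm; the sketch only mirrors that structure at a high level.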
About the Journal
Frontiers in Neurorobotics publishes rigorously peer-reviewed research in the science and technology of embodied autonomous neural systems. Specialty Chief Editors Alois C. Knoll and Florian Röhrbein at the Technische Universität München are supported by an outstanding Editorial Board of international experts. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics and the public worldwide.
Neural systems include brain-inspired algorithms (e.g. connectionist networks), computational models of biological neural networks (e.g. artificial spiking neural nets, large-scale simulations of neural microcircuits) and actual biological systems (e.g. in vivo and in vitro neural nets). The focus of the journal is the embodiment of such neural systems in artificial software and hardware devices, machines, robots or any other form of physical actuation. This also includes prosthetic devices, brain machine interfaces, wearable systems, micro-machines, furniture, home appliances, as well as systems for managing micro and macro infrastructures. Frontiers in Neurorobotics also aims to publish radically new tools and methods to study plasticity and development of autonomous self-learning systems that are capable of acquiring knowledge in an open-ended manner. Models complemented with experimental studies revealing self-organizing principles of embodied neural systems are welcome. Our journal also publishes on the micro and macro engineering and mechatronics of robotic devices driven by neural systems, as well as studies on the impact that such systems will have on our daily life.