{"title":"Adaptive discrete-time neural prescribed performance control: A safe control approach.","authors":"Zhonghua Wu, Bo Huang, Xiangwei Bu","doi":"10.1016/j.neunet.2024.107025","DOIUrl":null,"url":null,"abstract":"<p><p>Most existing results on prescribed performance control (PPC), subject to input saturation and initial condition limitations, focus on continuous-time nonlinear systems. This article, as regards discrete-time nonlinear systems, is dedicated to constructing a novel adaptive switching control strategy to circumvent the singular problem when the PPC undergoes input saturation, while the initial conditions of the system can be released under the framework of PPC. The main design steps and characteristics include: (1) By devising a new discrete-time global finite-time performance function (DTGFTPF), the constructed performance boundary is shown to survive insensitive to arbitrary initial values, which present in the system; (2) A discrete-time adaptive finite-time prescribed performance controller (DTAFPPC) and a discrete-time adaptive backstepping controller (DTABC) are constructed, simultaneously. The DTAFPPC possesses the capability to drive tracking error convergence within preset boundaries within a finite time. In the presence of input saturation, the DTABC is applied to prevent system instability while permitting tracking error to occasionally exceed performance bounds without compromising overall stability; and (3) To overcome non-causal problems inherent in backstepping designs, the current moment values of the errors are integrated into the controllers and the adaptive update laws. The stability of the closed-loop system is validated through Lyapunov analysis theory and simulations.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"184 ","pages":"107025"},"PeriodicalIF":6.0000,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.neunet.2024.107025","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Most existing results on prescribed performance control (PPC) subject to input saturation and initial-condition limitations focus on continuous-time nonlinear systems. This article instead addresses discrete-time nonlinear systems and constructs a novel adaptive switching control strategy that circumvents the singularity problem arising when PPC encounters input saturation, while relaxing the restriction on the system's initial conditions within the PPC framework. The main design steps and characteristics are: (1) By devising a new discrete-time global finite-time performance function (DTGFTPF), the constructed performance boundary is shown to remain insensitive to arbitrary initial values of the system; (2) A discrete-time adaptive finite-time prescribed performance controller (DTAFPPC) and a discrete-time adaptive backstepping controller (DTABC) are constructed simultaneously. The DTAFPPC drives the tracking error to converge within the preset boundaries in finite time. Under input saturation, the DTABC is applied to prevent system instability while allowing the tracking error to occasionally exceed the performance bounds without compromising overall stability; and (3) To overcome the non-causal problem inherent in backstepping designs, the current-step values of the errors are incorporated into the controllers and the adaptive update laws. The stability of the closed-loop system is validated through Lyapunov analysis and simulations.
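The abstract does not give the exact form of the DTGFTPF or the switching law, so the following is only a minimal illustrative sketch of the general idea: a discrete-time finite-time performance envelope that settles to a constant bound after a preset step, and a saturation-triggered switch between a PPC-style law and a backstepping-style fallback. All function names, gains, and parameters (performance_envelope, rho0, rho_inf, K_f, u_max, the toy plant) are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical discrete-time finite-time performance envelope:
# decays from rho0 to rho_inf and stays constant after step K_f,
# so the bound no longer depends on the initial error once rho0
# is large enough (illustration only, not the paper's DTGFTPF).
def performance_envelope(k, rho0=5.0, rho_inf=0.1, K_f=50):
    if k >= K_f:
        return rho_inf
    return (rho0 - rho_inf) * (1 - k / K_f) ** 2 + rho_inf

# Simple saturation-aware switching: use the prescribed-performance
# law while the actuator is unsaturated, otherwise fall back to a
# plain backstepping-style law that tolerates bound violations.
def switched_control(u_ppc, u_bs, u_max=2.0):
    if abs(u_ppc) <= u_max:
        return u_ppc                      # PPC branch (DTAFPPC-like)
    return np.clip(u_bs, -u_max, u_max)   # fallback branch (DTABC-like)

# Toy example: regulate a scalar first-order discrete-time plant to zero.
x, ref = 3.0, 0.0
for k in range(80):
    e = x - ref
    rho = performance_envelope(k)
    # Hypothetical control laws for illustration only
    u_ppc = -0.8 * np.arctanh(np.clip(e / rho, -0.99, 0.99))
    u_bs = -0.5 * e
    u = switched_control(u_ppc, u_bs)
    x = 0.9 * x + u                       # toy plant: x[k+1] = 0.9*x[k] + u[k]
```

The switch mirrors the paper's stated intent at a high level: the PPC-style branch enforces the envelope while the input is feasible, and the fallback branch preserves stability under saturation even if the error briefly leaves the prescribed bounds.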
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.