Title: A discontinuous recurrent neural network with predefined time convergence for solution of linear programming
Authors: J. Sánchez-Torres, E. Sánchez, A. Loukianov
Published in: 2014 IEEE Symposium on Swarm Intelligence, 2014-12-09
DOI: 10.1109/SIS.2014.7011799 (https://doi.org/10.1109/SIS.2014.7011799)
Citations: 58
Abstract
The aim of this paper is to introduce a new recurrent neural network for solving linear programming problems. The main characteristic of the proposed scheme is its design based on predefined-time stability, a stronger form of finite-time stability that allows the a priori definition of a convergence time independent of the network's initial state. The network structure is based on the Karush-Kuhn-Tucker (KKT) conditions, and the KKT multipliers are proposed as sliding-mode control inputs. This selection yields a one-layer recurrent neural network in which the only parameter to be tuned is the desired convergence time. With these features, the network can easily be scaled from small to higher-dimensional problems. The simulation of a simple example shows the feasibility of the proposed approach.
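The notion of predefined-time stability can be illustrated with a minimal scalar sketch (an illustrative assumption, not the paper's actual KKT-based network): the system dx/dt = -(1/T)·exp(|x|)·sign(x) settles at the origin in at most time T(1 − e^(−|x0|)) < T for every initial state x0, so T acts as the single tunable parameter, mirroring the paper's claim that the desired convergence time is the only design knob.

```python
import numpy as np

def simulate(x0, T=1.0, dt=1e-4, tol=1e-3):
    """Forward-Euler simulation of the predefined-time stable scalar
    system dx/dt = -(1/T) * exp(|x|) * sign(x) (illustrative example,
    not the paper's KKT network). Returns the time at which |x| first
    falls below `tol`; analytically this is T*(1 - exp(-|x0|)) < T."""
    x, t = float(x0), 0.0
    while abs(x) > tol and t < 2 * T:
        x += dt * (-(1.0 / T) * np.exp(abs(x)) * np.sign(x))
        t += dt
    return t

# The settling time stays below the predefined bound T = 1.0 for
# widely different initial conditions (note: plain Euler steps need
# moderate |x0|, since the right-hand side grows exponentially):
for x0 in (0.5, 2.0, 5.0):
    print(f"x0 = {x0:4.1f}  ->  settling time ~ {simulate(x0):.4f}  (bound T = 1.0)")
```

The key property on display is that enlarging the initial condition increases the settling time only up to the predefined bound T, unlike asymptotic or plain finite-time schemes whose convergence time grows without bound in the initial state.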