Optimal Control of State-Affine Dynamical Systems
A. Komaee
2020 American Control Conference (ACC), July 2020
DOI: 10.23919/ACC45564.2020.9147475
Citations: 0
Abstract
Optimal control of state-affine systems of finite or infinite dimension is considered. Control performance is measured by a cost functional with a state-affine Lagrangian and a terminal cost. Relying on this affine structure, a simple proof of Pontryagin’s maximum principle as a necessary condition for optimality is presented. This principle requires any optimal control to solve a certain two-point boundary value problem. As the main contribution of this paper, an iterative algorithm is proposed that converges to the solution of this boundary value problem; this solution is then regarded as a candidate optimal control. Several applications of the optimal control problem of this paper are outlined, including optimal control of unobserved stochastic systems (continuous-time Markov chains and diffusion processes), convection-diffusion partial differential equations, and Lyapunov matrix differential equations.
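To illustrate the kind of two-point boundary value problem that Pontryagin’s principle produces here, the sketch below applies a standard forward-backward sweep iteration to a scalar toy problem with state-affine dynamics and a state-affine Lagrangian. This is an assumed illustrative example, not the paper’s algorithm or problem class; all numbers and the specific problem are made up for demonstration.

```python
import numpy as np

# Illustrative sketch (NOT the paper's algorithm): forward-backward sweep
# for a scalar state-affine optimal control problem
#   minimize  int_0^T (q*x + 0.5*r*u^2) dt + c*x(T)
#   subject to  dx/dt = a*x + b*u,  x(0) = x0.
# Pontryagin's principle yields the two-point boundary value problem
#   dx/dt = a*x + b*u*,   x(0) = x0,
#   dp/dt = -(a*p + q),   p(T) = c,    with  u* = -b*p / r.
a, b, q, r, c, x0, T = -1.0, 1.0, 2.0, 0.5, 1.0, 3.0, 2.0
N = 2000
dt = T / N
t = np.linspace(0.0, T, N + 1)

u = np.zeros(N + 1)              # initial control guess
for _ in range(60):              # sweep iterations
    # Forward pass: integrate the state under the current control (Euler).
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    # Backward pass: integrate the costate from its terminal condition.
    p = np.empty(N + 1)
    p[-1] = c
    for k in range(N, 0, -1):
        p[k - 1] = p[k] + dt * (a * p[k] + q)
    # Control update from the stationarity condition, with relaxation.
    u_new = -b * p / r
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = 0.5 * u + 0.5 * u_new

# Because the Lagrangian is affine in the state, the costate equation above
# does not involve x or u, so p(t) has the closed form
#   p(t) = (c + q/a) * exp(a*(T - t)) - q/a     (for a != 0),
# which serves as a check on the numerical sweep.
p_exact = (c + q / a) * np.exp(a * (T - t)) - q / a
print(np.max(np.abs(p - p_exact)))   # small O(dt) discretization error
```

In this toy case the affine structure makes the costate equation independent of the state and control, so the sweep converges geometrically; in general the two passes are coupled and relaxation of the control update is what drives convergence.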