Direct and indirect methods for learning optimal control laws
S. Atkins, W. Baker
Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), June 27, 1994
DOI: 10.1109/ICNN.1994.374644
Citations: 0
Abstract
The primary focus of this paper is to discuss two general approaches for incrementally synthesizing a nonlinear optimal control law, through real-time, closed-loop interactions between the dynamic system, its environment, and a learning control system, when substantial initial model uncertainty exists. Learning systems represent an on-line approach to the incremental synthesis of an optimal control law for situations where initial model uncertainty precludes the use of robust, fixed control laws, and where significant dynamic nonlinearities reduce the level of performance attainable by adaptive control laws. In parallel with the established framework of direct and indirect adaptive control algorithms, a direct/indirect framework is proposed as a means of classifying approaches to learning optimal control laws. Direct learning optimal control implies that the feedback loop which motivates the learning process is closed around system performance. Common properties of direct learning algorithms, including the apparent necessity of approximating two complementary functions, are reviewed. Indirect learning optimal control denotes a class of incremental control law synthesis methods for which the learning loop is closed around the system model. This class is illustrated by developing a simple optimal control law.
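To make the indirect idea concrete, the following is a minimal illustrative sketch (not the paper's own algorithm or example) of closing the learning loop around the system model: a scalar linear system x[k+1] = a*x[k] + b*u[k] with unknown parameters is excited, the parameters are identified from the closed-loop data by least squares, and an optimal (LQR-style) control law is then synthesized from the identified model. All names, the scalar plant, and the cost weights here are assumptions chosen for illustration.

```python
# Illustrative sketch of *indirect* learning optimal control (an assumption
# for exposition, not the paper's method): learn a model from interaction
# data, then synthesize the optimal control law from that model.
import random


def simulate(a, b, u_seq, x0=1.0):
    """Roll the true (unknown-to-the-learner) dynamics x[k+1] = a*x[k] + b*u[k]."""
    xs = [x0]
    for u in u_seq:
        xs.append(a * xs[-1] + b * u)
    return xs


def identify(xs, us):
    """Least-squares estimate of (a, b) from observed state/input transitions."""
    Sxx = sum(x * x for x in xs[:-1])
    Sxu = sum(x * u for x, u in zip(xs[:-1], us))
    Suu = sum(u * u for u in us)
    Sxy = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
    Suy = sum(u * y for u, y in zip(us, xs[1:]))
    det = Sxx * Suu - Sxu * Sxu
    a_hat = (Sxy * Suu - Suy * Sxu) / det
    b_hat = (Suy * Sxx - Sxy * Sxu) / det
    return a_hat, b_hat


def lqr_gain(a, b, q=1.0, r=1.0, iters=200):
    """Scalar discrete Riccati iteration for cost sum(q*x^2 + r*u^2); u = -K*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return (a * b * p) / (r + b * b * p)


# One cycle of the indirect scheme: excite, identify, synthesize.
random.seed(0)
us = [random.uniform(-1.0, 1.0) for _ in range(50)]
xs = simulate(1.2, 0.5, us)            # true (but unknown) a = 1.2, b = 0.5
a_hat, b_hat = identify(xs, us)
K = lqr_gain(a_hat, b_hat)
closed_loop = a_hat - b_hat * K        # |closed_loop| < 1 means stabilizing
```

A direct method, by contrast, would adjust the controller parameters (here, the gain K) using measured performance alone, without ever forming the intermediate estimates a_hat and b_hat.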