Output Feedback Reinforcement Q-learning for Optimal Quadratic Tracking Control of Unknown Discrete-Time Linear Systems and Its Application

Guangyue Zhao, Weijie Sun, He Cai, Yunjian Peng
2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), November 2018
DOI: 10.1109/ICARCV.2018.8581252
Citations: 0

Abstract

In this paper, a novel output feedback solution based on the Q-learning algorithm using the measured data is proposed for the linear quadratic tracking (LQT) problem of unknown discrete-time systems. To tackle this technical issue, an augmented system composed of the original controlled system and the linear command generator is first constructed. Then, by using the past input, output, and reference trajectory data of the augmented system, the output feedback Q-learning scheme is able to learn the optimal tracking controller online without requiring any knowledge of the augmented system dynamics. Learning algorithms including both policy iteration (PI) and value iteration (VI) algorithms are developed to converge to the optimal solution. Finally, simulation results are provided to verify the effectiveness of the proposed scheme.
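The core object the abstract describes can be sketched as follows. This is an illustrative, model-based simplification, not the paper's model-free output-feedback algorithm: it builds the augmented system (plant stacked with a linear command generator) and runs value iteration on the discounted LQT Riccati recursion until it converges to the optimal tracking gain. The plant `a, b`, reference generator `f`, weights `q, r_w`, and discount `g` are all placeholder values chosen for the example.

```python
def mat_mul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

a, b, f = 0.8, 1.0, 1.0       # plant x+ = a x + b u; reference r+ = f r
q, r_w, g = 1.0, 1.0, 0.9     # tracking weight, control weight, discount

# Augmented system X = [x; r]: X+ = T X + B u, with T = diag(A, F)
T = [[a, 0.0], [0.0, f]]
B = [[b], [0.0]]              # control enters the plant block only
# Stage cost q*(x - r)^2 = X^T C1 X with C1 = q * [[1, -1], [-1, 1]]
C1 = [[q, -q], [-q, q]]

# Value iteration (VI): P_{i+1} = C1 + g T'P T - g^2 T'P B (r + g B'P B)^{-1} B'P T
P = [[0.0, 0.0], [0.0, 0.0]]  # VI starts from P_0 = 0
for _ in range(500):
    PT = mat_mul(P, T)
    TtPT = mat_mul(transpose(T), PT)                  # T' P T
    BtPT = mat_mul(transpose(B), PT)                  # B' P T  (1x2)
    BtPB = mat_mul(transpose(B), mat_mul(P, B))[0][0] # B' P B  (scalar)
    denom = r_w + g * BtPB
    P = [[C1[i][j] + g * TtPT[i][j]
          - (g * g / denom) * BtPT[0][i] * BtPT[0][j]
          for j in range(2)] for i in range(2)]

# Optimal tracking gain: u = -K X, K = g (r + g B'P B)^{-1} B' P T
BtPT = mat_mul(transpose(B), mat_mul(P, T))
BtPB = mat_mul(transpose(B), mat_mul(P, B))[0][0]
K = [g * BtPT[0][j] / (r_w + g * BtPB) for j in range(2)]
print("P =", P)
print("K =", K)
```

The resulting gain has a positive feedback component on the plant state and a negative component on the reference (i.e., a feedforward term pushing `x` toward `r`). The paper's contribution is to reach the same kind of solution without `T` and `B`, by parameterizing a Q-function over past input/output/reference data and iterating with PI or VI on measurements alone.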