A stabilizing reinforcement learning approach for sampled systems with partially unknown models

IF 3.2 · CAS Region 3 (Computer Science) · Q2 AUTOMATION & CONTROL SYSTEMS
Lukas Beckenbach, Pavel Osinenko, Stefan Streif
{"title":"A stabilizing reinforcement learning approach for sampled systems with partially unknown models","authors":"Lukas Beckenbach,&nbsp;Pavel Osinenko,&nbsp;Stefan Streif","doi":"10.1002/rnc.7626","DOIUrl":null,"url":null,"abstract":"<p>Reinforcement learning is commonly associated with training of reward-maximizing (or cost-minimizing) agents, in other words, controllers. It can be applied in model-free or model-based fashion, using a priori or online collected system data to train involved parametric architectures. In general, online reinforcement learning does not guarantee closed loop stability unless special measures are taken, for instance, through learning constraints or tailored training rules. Particularly promising are hybrids of reinforcement learning with classical control approaches. In this work, we suggest a method to guarantee practical stability of the system-controller closed loop in a purely online learning setting, in other words, without offline training. Moreover, we assume only partial knowledge of the system model. To achieve the claimed results, we employ techniques of classical adaptive control. The implementation of the overall control scheme is provided explicitly in a digital, sampled setting. That is, the controller receives the state of the system and computes the control action at discrete, specifically, equidistant moments in time. The method is tested in adaptive traction control and cruise control where it proved to significantly reduce the cost.</p>","PeriodicalId":50291,"journal":{"name":"International Journal of Robust and Nonlinear Control","volume":"34 18","pages":"12389-12412"},"PeriodicalIF":3.2000,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Robust and Nonlinear Control","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/rnc.7626","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Reinforcement learning is commonly associated with the training of reward-maximizing (or cost-minimizing) agents, in other words, controllers. It can be applied in a model-free or model-based fashion, using a priori or online-collected system data to train the involved parametric architectures. In general, online reinforcement learning does not guarantee closed-loop stability unless special measures are taken, for instance, through learning constraints or tailored training rules. Particularly promising are hybrids of reinforcement learning with classical control approaches. In this work, we suggest a method to guarantee practical stability of the system-controller closed loop in a purely online learning setting, in other words, without offline training. Moreover, we assume only partial knowledge of the system model. To achieve the claimed results, we employ techniques of classical adaptive control. The implementation of the overall control scheme is provided explicitly in a digital, sampled setting: the controller receives the state of the system and computes the control action at discrete, specifically equidistant, moments in time. The method is tested in adaptive traction control and cruise control, where it is shown to significantly reduce the cost.
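
The abstract gives no implementation details, so the following Python sketch is only an illustration of the sampled, equidistant-in-time setting it describes: at each sampling instant the controller reads the state, performs an online update of a parametric critic, and emits a control action, with a known stabilizing controller kept as a fallback guard. Every concrete choice below (the quadratic features, the TD update, the gain K, the decay check, the double-integrator plant) is a hypothetical assumption for illustration, not the authors' scheme.

```python
import numpy as np

# All constants below are hypothetical choices for illustration only.
SAMPLE_PERIOD = 0.01            # equidistant sampling time, seconds
w = np.zeros(3)                 # critic parameters, trained purely online
K = np.array([[2.0, 1.0]])      # gain of a known stabilizing fallback law


def features(x):
    """Quadratic critic features phi(x); an illustrative choice."""
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])


def stage_cost(x, u):
    """Running cost r(x, u) to be minimized (illustrative quadratic cost)."""
    return float(x @ x + 0.1 * (u @ u))


def control_step(x, x_prev, u_prev, lr=0.05, rng=np.random.default_rng(0)):
    """One sampled control step: online critic update plus a guarded action."""
    global w
    # Semi-gradient TD(0) update of the critic from the last transition.
    td = stage_cost(x_prev, u_prev) + w @ features(x) - w @ features(x_prev)
    w += lr * td * features(x_prev)

    u_fallback = -(K @ x)                   # known stabilizing action
    u_candidate = u_fallback + 0.1 * rng.standard_normal(u_fallback.shape)

    # Guard: accept the exploratory candidate only if the critic value
    # decayed along the last step (a crude Lyapunov-like decay check);
    # otherwise revert to the classical fallback controller.
    if w @ features(x) <= w @ features(x_prev):
        return u_candidate
    return u_fallback


# Closed loop on a discretized double integrator (illustrative plant).
A = np.array([[1.0, SAMPLE_PERIOD], [0.0, 1.0]])
B = np.array([[0.0], [SAMPLE_PERIOD]])
x, x_prev, u_prev = np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.zeros(1)
for _ in range(500):
    u = control_step(x, x_prev, u_prev)
    x_prev, u_prev = x, u
    x = A @ x + (B @ u).ravel()
```

In the paper's actual scheme, the stability guarantee comes from classical adaptive-control machinery rather than the naive decay check above; the sketch only shows where such a guard sits inside an equidistantly sampled control loop.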

Source Journal

International Journal of Robust and Nonlinear Control
Category: Engineering & Technology - Engineering: Electrical & Electronic
CiteScore: 6.70
Self-citation rate: 20.50%
Articles published: 505
Review time: 2.7 months
Aims and scope: Papers that do not include an element of robust or nonlinear control and estimation theory will not be considered by the journal, and all papers will be expected to include significant novel content. The focus of the journal is on model-based control design approaches rather than heuristic or rule-based methods. Papers on neural networks will have to be of exceptional novelty to be considered for the journal.