Toward Improving the Distributional Robustness of Risk-Aware Controllers in Learning-Enabled Environments

A. Hakobyan, Insoon Yang
{"title":"Toward Improving the Distributional Robustness of Risk-Aware Controllers in Learning-Enabled Environments","authors":"A. Hakobyan, Insoon Yang","doi":"10.1109/CDC45484.2021.9682981","DOIUrl":null,"url":null,"abstract":"This paper is concerned with designing a risk-aware controller in an unknown and dynamic environment. In our method, the evolution of the environment state is learned using observational data via Gaussian process regression (GPR). Unfortunately, these learning results provide imperfect distribution information about the environment. To address such distribution errors, we propose a risk-constrained model predictive control (MPC) method that exploits techniques from modern distributionally robust optimization (DRO). To resolve the infinite dimensionality issue inherent in DRO, we derive a tractable semidefinite programming (SDP) problem that upper-bounds the original MPC problem. Furthermore, the SDP problem is reduced to a quadratic program when the constraint function has a decomposable form. The performance and the utility of our method are demonstrated through an autonomous driving problem, and the results show that our controller preserves safety despite errors in learning the behaviors of surrounding vehicles.","PeriodicalId":229089,"journal":{"name":"2021 60th IEEE Conference on Decision and Control (CDC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 60th IEEE Conference on Decision and Control (CDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CDC45484.2021.9682981","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

This paper is concerned with designing a risk-aware controller in an unknown and dynamic environment. In our method, the evolution of the environment state is learned using observational data via Gaussian process regression (GPR). Unfortunately, these learning results provide imperfect distribution information about the environment. To address such distribution errors, we propose a risk-constrained model predictive control (MPC) method that exploits techniques from modern distributionally robust optimization (DRO). To resolve the infinite dimensionality issue inherent in DRO, we derive a tractable semidefinite programming (SDP) problem that upper-bounds the original MPC problem. Furthermore, the SDP problem is reduced to a quadratic program when the constraint function has a decomposable form. The performance and the utility of our method are demonstrated through an autonomous driving problem, and the results show that our controller preserves safety despite errors in learning the behaviors of surrounding vehicles.
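As a rough illustration of the learning step described in the abstract, the sketch below (not taken from the paper) fits a one-step predictor of a surrounding vehicle's position with scikit-learn's GaussianProcessRegressor. The synthetic data, kernel choice, and variable names are assumptions made purely for illustration; the predictive mean and standard deviation it returns stand in for the "imperfect distribution information" that the risk-constrained MPC would then need to be robust against.

```python
# Illustrative sketch (not the authors' code): learn an obstacle's one-step
# state evolution from observed (state_t, state_{t+1}) pairs via GPR, then
# query the predictive mean and standard deviation that a downstream
# risk-constrained MPC could treat as (imperfect) distribution information.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical observation data for a surrounding vehicle (2-D position).
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(200, 2))                     # positions at time t
Y = X + np.array([0.10, 0.05]) + 0.05 * rng.standard_normal(X.shape)  # positions at t+1

# One GP per output dimension, with an RBF kernel plus observation noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
models = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, i])
          for i in range(Y.shape[1])]

# Predictive distribution of the next obstacle state at the current state.
x_query = np.array([[0.5, -1.2]])
mean = np.array([m.predict(x_query)[0] for m in models])
std = np.array([m.predict(x_query, return_std=True)[1][0] for m in models])
print("predicted next state:", mean, "+/- std:", std)
```

In the paper's pipeline, such learned predictive distributions are not trusted as exact; the distributionally robust risk constraint in the MPC is designed to remain feasible under the resulting distribution errors.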