Adaptive Risk Sensitive Path Integral for Model Predictive Control via Reinforcement Learning
Hyung-Jin Yoon, Chuyuan Tao, Hunmin Kim, N. Hovakimyan, P. Voulgaris
2023 31st Mediterranean Conference on Control and Automation (MED), June 26, 2023. DOI: 10.1109/MED59994.2023.10185876
Abstract
We propose a reinforcement learning framework in which an agent uses an internal nominal model for stochastic model predictive control (MPC) while compensating for a disturbance. Our work builds on existing risk-aware optimal control with stochastic differential equations (SDEs), which aims to deal with such disturbances. However, the risk sensitivity and the noise strength of the nominal SDE in risk-aware optimal control are often chosen heuristically. In the proposed framework, a risk-taking policy determines whether the MPC behaves in a risk-seeking (exploration) or risk-averse (exploitation) manner. Specifically, we employ risk-aware path integral control, which can be implemented as Monte-Carlo (MC) sampling with fast parallel simulations on a GPU. MC sampling implementations of MPC have been successful in robotic applications due to their real-time computation capability. The proposed framework, which adapts the noise model and the risk sensitivity, outperforms the standard model predictive path integral controller in simulation environments with disturbances.
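To make the sampling-based controller described above concrete, the following is a minimal sketch of a risk-sensitive, MPPI-style path integral update, not the paper's exact algorithm. It assumes a hypothetical 1-D double-integrator nominal model and quadratic cost; the noise scale `sigma` and risk-sensitivity parameter `gamma` are the quantities the paper proposes to adapt with a reinforcement-learning policy, but here they are fixed arguments for illustration. On a GPU the rollouts would be simulated in parallel rather than in a Python loop.

```python
import numpy as np

def risk_sensitive_mppi_step(x0, u_nominal, dynamics, cost,
                             num_samples=1024, sigma=0.5, gamma=1.0):
    """One MPPI-style update with risk-sensitive weighting.

    Larger gamma concentrates weight on low-cost rollouts (risk-averse /
    exploitative); smaller gamma flattens the weights (risk-seeking /
    explorative). sigma sets the exploration noise of the nominal SDE.
    """
    horizon, udim = u_nominal.shape
    # Sample control perturbations around the nominal plan.
    eps = sigma * np.random.randn(num_samples, horizon, udim)
    costs = np.zeros(num_samples)
    for k in range(num_samples):          # parallelized on a GPU in practice
        x = x0.copy()
        for t in range(horizon):
            u = u_nominal[t] + eps[k, t]
            x = dynamics(x, u)
            costs[k] += cost(x, u)
    # Exponentially weighted average of perturbations (softmin over costs).
    costs -= costs.min()                  # numerical stabilization
    weights = np.exp(-gamma * costs)
    weights /= weights.sum()
    return u_nominal + np.einsum('k,ktu->tu', weights, eps)


# Hypothetical nominal model and cost: 1-D double integrator.
def dynamics(x, u, dt=0.05):
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * u[0]])

def cost(x, u):
    return x[0] ** 2 + 0.1 * x[1] ** 2 + 0.01 * u[0] ** 2

u_plan = risk_sensitive_mppi_step(np.array([1.0, 0.0]),
                                  np.zeros((20, 1)), dynamics, cost)
print(u_plan[0])  # first control of the updated plan
```

In a receding-horizon loop, only the first control of `u_plan` would be applied and the remainder warm-starts the next step; the adaptive element of the paper corresponds to an outer policy that selects `sigma` and `gamma` at each step.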