{"title":"不确定条件下基于可靠性的强化学习","authors":"Zequn Wang, Narendra Patwardhan","doi":"10.1115/detc2020-22019","DOIUrl":null,"url":null,"abstract":"\n Despite the numerous advances, reinforcement learning remains away from widespread acceptance for autonomous controller design as compared to classical methods due to lack of ability to effectively tackle uncertainty. The reliance on absolute or deterministic reward as a metric for optimization process renders reinforcement learning highly susceptible to changes in problem dynamics. We introduce a novel framework that effectively quantify the uncertainty in the design space and induces robustness in controllers by switching to a reliability-based optimization routine. A model-based approach is used to improve the data efficiency of the method while predicting the system dynamics. We prove the stability of learned neuro-controllers in both static and dynamic environments on classical reinforcement learning tasks such as Cart Pole balancing and Inverted Pendulum.","PeriodicalId":415040,"journal":{"name":"Volume 11A: 46th Design Automation Conference (DAC)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reliability-Based Reinforcement Learning Under Uncertainty\",\"authors\":\"Zequn Wang, Narendra Patwardhan\",\"doi\":\"10.1115/detc2020-22019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Despite the numerous advances, reinforcement learning remains away from widespread acceptance for autonomous controller design as compared to classical methods due to lack of ability to effectively tackle uncertainty. The reliance on absolute or deterministic reward as a metric for optimization process renders reinforcement learning highly susceptible to changes in problem dynamics. We introduce a novel framework that effectively quantify the uncertainty in the design space and induces robustness in controllers by switching to a reliability-based optimization routine. A model-based approach is used to improve the data efficiency of the method while predicting the system dynamics. 
We prove the stability of learned neuro-controllers in both static and dynamic environments on classical reinforcement learning tasks such as Cart Pole balancing and Inverted Pendulum.\",\"PeriodicalId\":415040,\"journal\":{\"name\":\"Volume 11A: 46th Design Automation Conference (DAC)\",\"volume\":\"115 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Volume 11A: 46th Design Automation Conference (DAC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1115/detc2020-22019\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Volume 11A: 46th Design Automation Conference (DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/detc2020-22019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Reliability-Based Reinforcement Learning Under Uncertainty
Despite numerous advances, reinforcement learning remains far from widespread acceptance for autonomous controller design compared to classical methods, owing to its limited ability to effectively handle uncertainty. The reliance on an absolute or deterministic reward as the metric for the optimization process renders reinforcement learning highly susceptible to changes in problem dynamics. We introduce a novel framework that effectively quantifies the uncertainty in the design space and induces robustness in controllers by switching to a reliability-based optimization routine. A model-based approach is used to improve the data efficiency of the method while predicting the system dynamics. We demonstrate the stability of the learned neuro-controllers in both static and dynamic environments on classical reinforcement learning tasks such as Cart Pole balancing and Inverted Pendulum.
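As a rough illustration of the core idea only (the abstract does not detail the authors' implementation), a reliability-based objective can be sketched as the probability that a policy's return clears a threshold under model uncertainty, with the model-based component represented by an ensemble of learned dynamics models. Every name, the ensemble structure, the rollout horizon, and the threshold below are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: score a policy by its reliability, i.e. the estimated
# probability that its return meets a threshold across an ensemble of learned
# dynamics models, rather than by its expected (deterministic) return.
import numpy as np

def rollout_return(policy, model, init_state, horizon=200):
    """Simulate one episode under a single learned dynamics model."""
    state, total = init_state, 0.0
    for _ in range(horizon):
        action = policy(state)
        # The model predicts the next state, reward, and termination;
        # it stands in for the true environment to save real interactions.
        state, reward, done = model(state, action)
        total += reward
        if done:
            break
    return total

def reliability(policy, model_ensemble, init_states, threshold):
    """Estimate P(return >= threshold) over model and initial-state uncertainty."""
    returns = [
        rollout_return(policy, model, s)
        for model in model_ensemble
        for s in init_states
    ]
    return np.mean(np.asarray(returns) >= threshold)
```

A policy-search routine would then maximize `reliability(...)` instead of the mean return, which is one way a reliability-based criterion can make a learned controller less sensitive to shifts in the problem dynamics.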