{"title":"基于时间差分动态逻辑的可靠强化学习","authors":"Runhao Wang, Yuhong Zhang, Haiying Sun, Jing Liu","doi":"10.1109/ISCC53001.2021.9631442","DOIUrl":null,"url":null,"abstract":"Reinforcement learning algorithms discover policies that are lauded for their high efficiency, but don't necessarily guarantee safety. We introduce a new approach that provides the best of both worlds: learning optimal policies while enforcing the system to comply with certain model to keep the learning dependable. To this end, we propose Timed Differential Dynamic Logic to express the system properties. Our main insight is to convert the properties to runtime monitors, and use them to monitor whether the system is correctly modeled. We choose the optimal polices only if the reality matches the model, or we will abandon efficiency and instead to choose a policy that guides the agent to a modeled portion of the state space. We also propose Dependable Mixed Control (DMC) algorithm to implement a framework for application. Finally, the effectiveness of our approach is validated through a case study on Communication-Based Autonomous Control (CBAC).","PeriodicalId":270786,"journal":{"name":"2021 IEEE Symposium on Computers and Communications (ISCC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dependable Reinforcement Learning via Timed Differential Dynamic Logic\",\"authors\":\"Runhao Wang, Yuhong Zhang, Haiying Sun, Jing Liu\",\"doi\":\"10.1109/ISCC53001.2021.9631442\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement learning algorithms discover policies that are lauded for their high efficiency, but don't necessarily guarantee safety. We introduce a new approach that provides the best of both worlds: learning optimal policies while enforcing the system to comply with certain model to keep the learning dependable. To this end, we propose Timed Differential Dynamic Logic to express the system properties. Our main insight is to convert the properties to runtime monitors, and use them to monitor whether the system is correctly modeled. We choose the optimal polices only if the reality matches the model, or we will abandon efficiency and instead to choose a policy that guides the agent to a modeled portion of the state space. We also propose Dependable Mixed Control (DMC) algorithm to implement a framework for application. 
Finally, the effectiveness of our approach is validated through a case study on Communication-Based Autonomous Control (CBAC).\",\"PeriodicalId\":270786,\"journal\":{\"name\":\"2021 IEEE Symposium on Computers and Communications (ISCC)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Symposium on Computers and Communications (ISCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCC53001.2021.9631442\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Symposium on Computers and Communications (ISCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCC53001.2021.9631442","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Dependable Reinforcement Learning via Timed Differential Dynamic Logic
Reinforcement learning algorithms discover policies that are lauded for their high efficiency, but they do not necessarily guarantee safety. We introduce a new approach that provides the best of both worlds: it learns optimal policies while constraining the system to conform to a given model, keeping the learning dependable. To this end, we propose Timed Differential Dynamic Logic to express system properties. Our main insight is to convert these properties into runtime monitors and use them to check whether the system is correctly modeled. We select the optimal policies only when reality matches the model; otherwise, we sacrifice efficiency and instead choose a policy that guides the agent back to a modeled portion of the state space. We also propose the Dependable Mixed Control (DMC) algorithm, which implements this framework for application. Finally, the effectiveness of our approach is validated through a case study on Communication-Based Autonomous Control (CBAC).
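The abstract's core mechanism is a runtime monitor that gates policy selection: act with the learned policy only while observed behavior matches the model, and otherwise fall back to a policy that steers the agent back into the modeled region. The following minimal Python sketch illustrates that control flow under assumed names (State, model_monitor, optimal_policy, fallback_policy, dependable_step); it is not the paper's DMC implementation, and the modeled dynamics and tolerance are placeholders.

```python
# Sketch of monitor-gated policy selection in the spirit of Dependable Mixed Control.
# All names and the simple kinematic model below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class State:
    position: float
    velocity: float
    clock: float  # elapsed time; the logic in the paper is timed


Action = float
Policy = Callable[[State], Action]


def model_monitor(prev: State, curr: State, dt: float, tol: float = 0.05) -> bool:
    """Check that the observed transition is consistent with the modeled
    dynamics (here: position should evolve with velocity, within a tolerance)."""
    predicted_position = prev.position + prev.velocity * dt
    return abs(curr.position - predicted_position) <= tol


def dependable_step(prev: State, curr: State, dt: float,
                    optimal_policy: Policy, fallback_policy: Policy) -> Action:
    """Use the learned optimal policy only when reality matches the model;
    otherwise sacrifice efficiency and choose the fallback policy that guides
    the agent back to a modeled portion of the state space."""
    if model_monitor(prev, curr, dt):
        return optimal_policy(curr)
    return fallback_policy(curr)
```

In this reading, the Timed Differential Dynamic Logic properties would be compiled into checks like model_monitor, and the monitor verdict decides at each step which of the two controllers is allowed to act.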