{"title":"离线强化学习的隐性策略约束","authors":"Zhiyong Peng, Yadong Liu, Changlin Han, Zongtan Zhou","doi":"10.1049/cit2.12304","DOIUrl":null,"url":null,"abstract":"<p>Offline reinforcement learning (RL) aims to learn policies entirely from passively collected datasets, making it a data-driven decision method. One of the main challenges in offline RL is the distribution shift problem, which causes the algorithm to visit out-of-distribution (OOD) samples. The distribution shift can be mitigated by constraining the divergence between the target policy and the behaviour policy. However, this method can overly constrain the target policy and impair the algorithm's performance, as it does not directly distinguish between in-distribution and OOD samples. In addition, it is difficult to learn and represent multi-modal behaviour policy when the datasets are collected by several different behaviour policies. To overcome these drawbacks, the authors address the distribution shift problem by implicit policy constraints with energy-based models (EBMs) rather than explicitly modelling the behaviour policy. The EBM is powerful for representing complex multi-modal distributions as well as the ability to distinguish in-distribution samples and OODs. Experimental results show that their method significantly outperforms the explicit policy constraint method and other baselines. In addition, the learnt energy model can be used to indicate OOD visits and alert the possible failure.</p>","PeriodicalId":46211,"journal":{"name":"CAAI Transactions on Intelligence Technology","volume":"9 4","pages":"973-981"},"PeriodicalIF":8.4000,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.12304","citationCount":"0","resultStr":"{\"title\":\"Implicit policy constraint for offline reinforcement learning\",\"authors\":\"Zhiyong Peng, Yadong Liu, Changlin Han, Zongtan Zhou\",\"doi\":\"10.1049/cit2.12304\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Offline reinforcement learning (RL) aims to learn policies entirely from passively collected datasets, making it a data-driven decision method. One of the main challenges in offline RL is the distribution shift problem, which causes the algorithm to visit out-of-distribution (OOD) samples. The distribution shift can be mitigated by constraining the divergence between the target policy and the behaviour policy. However, this method can overly constrain the target policy and impair the algorithm's performance, as it does not directly distinguish between in-distribution and OOD samples. In addition, it is difficult to learn and represent multi-modal behaviour policy when the datasets are collected by several different behaviour policies. To overcome these drawbacks, the authors address the distribution shift problem by implicit policy constraints with energy-based models (EBMs) rather than explicitly modelling the behaviour policy. The EBM is powerful for representing complex multi-modal distributions as well as the ability to distinguish in-distribution samples and OODs. Experimental results show that their method significantly outperforms the explicit policy constraint method and other baselines. 
In addition, the learnt energy model can be used to indicate OOD visits and alert the possible failure.</p>\",\"PeriodicalId\":46211,\"journal\":{\"name\":\"CAAI Transactions on Intelligence Technology\",\"volume\":\"9 4\",\"pages\":\"973-981\"},\"PeriodicalIF\":8.4000,\"publicationDate\":\"2024-03-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.12304\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"CAAI Transactions on Intelligence Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/cit2.12304\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"CAAI Transactions on Intelligence Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cit2.12304","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Implicit policy constraint for offline reinforcement learning
Offline reinforcement learning (RL) aims to learn policies entirely from passively collected datasets, making it a data-driven decision-making method. One of the main challenges in offline RL is the distribution shift problem, which causes the algorithm to visit out-of-distribution (OOD) samples. Distribution shift can be mitigated by constraining the divergence between the target policy and the behaviour policy. However, this approach can over-constrain the target policy and impair performance, because it does not directly distinguish between in-distribution and OOD samples. In addition, it is difficult to learn and represent a multi-modal behaviour policy when the dataset has been collected by several different behaviour policies. To overcome these drawbacks, the authors address the distribution shift problem through implicit policy constraints with energy-based models (EBMs), rather than by explicitly modelling the behaviour policy. EBMs can represent complex multi-modal distributions and can distinguish in-distribution samples from OOD ones. Experimental results show that the method significantly outperforms the explicit policy constraint method and other baselines. In addition, the learnt energy model can be used to flag OOD visits and warn of possible failure.
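To make the idea in the abstract concrete, below is a minimal, illustrative sketch (in PyTorch) of how an energy-based model over state-action pairs can serve as an implicit policy constraint: the EBM is trained contrastively to assign low energy to dataset actions and high energy to other actions, and the policy is then penalised by the energy of its proposed actions rather than by an explicit divergence to a separately modelled behaviour policy. This is not the authors' exact algorithm; the class and function names, the contrastive objective, and all hyperparameters here are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the algorithm from the paper.
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    """Assigns a scalar energy to a (state, action) pair; low energy is meant
    to indicate 'in-distribution', high energy 'out-of-distribution'."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def ebm_contrastive_loss(ebm, states, dataset_actions, num_negatives=16):
    """Contrastive training sketch: push energy down on dataset actions and up
    on randomly drawn 'negative' actions, so the learnt energy separates
    in-distribution samples from OOD ones."""
    pos_energy = ebm(states, dataset_actions)                       # (B,)
    # Negatives: uniform actions in [-1, 1] (an assumption about the action
    # space made only for this example).
    neg_actions = 2 * torch.rand(num_negatives, *dataset_actions.shape) - 1
    neg_energy = ebm(states.expand(num_negatives, *states.shape), neg_actions)  # (K, B)
    # InfoNCE-style objective: the dataset action should have the lowest energy.
    logits = torch.cat([-pos_energy.unsqueeze(0), -neg_energy], dim=0)  # (1+K, B)
    targets = torch.zeros(states.shape[0], dtype=torch.long)
    return nn.functional.cross_entropy(logits.t(), targets)

def actor_loss_with_implicit_constraint(critic, ebm, states, policy_actions, alpha=1.0):
    """Policy objective sketch: maximise the critic's value while penalising
    high-energy (likely OOD) actions, instead of adding an explicit divergence
    term towards a fitted behaviour-policy model."""
    q_value = critic(states, policy_actions)
    ood_penalty = ebm(states, policy_actions)
    return (-q_value + alpha * ood_penalty).mean()
```

In this sketch the energy term plays the role that a divergence penalty plays in explicit policy constraint methods, but it only needs samples from the dataset rather than a fitted (possibly multi-modal) behaviour-policy model, and the same energy can be read off at deployment time as a rough OOD indicator.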
Journal introduction:
CAAI Transactions on Intelligence Technology is a leading venue for original research on the theoretical and experimental aspects of artificial intelligence technology. We are a fully open access journal co-published by the Institution of Engineering and Technology (IET) and the Chinese Association for Artificial Intelligence (CAAI), providing research that is openly accessible to read and share worldwide.