Improved Exploration With Demonstrations in Procedurally-Generated Environments
Authors: Mao Xu; Shuzhi Sam Ge; Dongjie Zhao; Qian Zhao
Journal: IEEE Transactions on Games, vol. 16, no. 3, pp. 530-543
Published: 2023-07-31
DOI: 10.1109/TG.2023.3299986
URL: https://ieeexplore.ieee.org/document/10197470/
Abstract
Exploring sparse-reward environments remains a major challenge in model-free deep reinforcement learning (RL). State-of-the-art exploration methods address this challenge by using intrinsic rewards to guide exploration toward uncertain environment dynamics or novel states. However, these methods fall short in procedurally-generated environments, where the agent is unlikely to visit a state more than once because a different environment is generated in each episode. Recently, imitation-learning-based exploration methods have been proposed that guide exploration across different kinds of procedurally-generated environments by imitating high-quality exploration episodes. However, these methods exhibit weaker exploration capability and lower sample efficiency in complex procedurally-generated environments. Motivated by the fact that demonstrations can guide exploration in sparse-reward environments, we propose improved exploration with demonstrations (IEWD), an imitation-learning-based exploration method for procedurally-generated environments that utilizes demonstrations from these environments. IEWD assigns an episode-level exploration score to each demonstration episode and each generated episode. It then ranks these episodes by score and stores the highest-scored episodes in a small ranking buffer. Treating these highly-scored episodes as good exploration episodes, IEWD trains the deep RL agent to imitate the exploration behaviors stored in the ranking buffer, thereby reproducing the behaviors of good exploration episodes.
Additionally, IEWD uses an experience replay buffer to store generated positive episodes and demonstrations, and employs self-imitation learning to optimize the deep RL agent's policy with experiences drawn from this buffer. We evaluate IEWD on several procedurally-generated MiniGrid environments and on 3-D maze environments from MiniWorld. The results show that IEWD significantly outperforms existing learning-from-demonstration and exploration methods, including state-of-the-art imitation-learning-based exploration methods, in both sample efficiency and final performance in complex procedurally-generated environments.
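The core mechanism the abstract describes, a small buffer that ranks episodes by an episode-level exploration score and retains only the best, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `RankingBuffer`, the capacity, and the toy scoring rule (counting distinct visited states) are all assumptions, since the abstract does not specify how the exploration score is computed.

```python
import heapq
import random

class RankingBuffer:
    """Hypothetical sketch of a small ranking buffer in the spirit of IEWD:
    keeps only the top-capacity episodes by episode-level exploration score."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self._heap = []      # min-heap of (score, tiebreak, episode)
        self._tiebreak = 0   # insertion counter so episodes are never compared

    def add(self, score, episode):
        """Insert an episode; evict the lowest-scored one if over capacity."""
        item = (score, self._tiebreak, episode)
        self._tiebreak += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif score > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)

    def scores(self):
        """Sorted scores of the currently stored episodes."""
        return sorted(s for s, _, _ in self._heap)

    def sample(self):
        """Sample one stored episode for the agent to imitate."""
        return random.choice(self._heap)[2]

# Toy usage: episodes are lists of (state, action) pairs, and the score
# counts distinct visited states -- a stand-in for the paper's
# (unspecified) episode-level exploration score.
buf = RankingBuffer(capacity=3)
for ep_id in range(6):
    episode = [(f"s{ep_id}_{t}", t % 4) for t in range(ep_id + 1)]
    score = len({state for state, _ in episode})
    buf.add(score, episode)

print(buf.scores())  # only the three highest-scoring episodes remain: [4, 5, 6]
```

In a full training loop, the agent's imitation loss would be computed on episodes drawn via `sample()`, alongside a self-imitation-learning update from a separate experience replay buffer, as the abstract outlines.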