{"title":"A Model-Based Exploration Policy in Deep Q-Network","authors":"Shuailong Li, Wei Zhang, Yuquan Leng, Xin Zhang","doi":"10.1109/dsins54396.2021.9670573","DOIUrl":null,"url":null,"abstract":"Reinforcement learning has successfully been used in many applications and achieved prodigious performance (such as video games), and DQN is a well-known algorithm in RL. However, there are some disadvantages in practical applications, and the exploration and exploitation dilemma is one of them. To solve this problem, common strategies about exploration like ɛ–greedy have risen. Unfortunately, there are sample inefficient and ineffective because of the uncertainty of later exploration. In this paper, we propose a model-based exploration method that learns the state transition model to explore. Using the training rules of machine learning, we can train the state transition model networks to improve exploration efficiency and sample efficiency. We compare our algorithm with ɛ–greedy on the Deep Q-Networks (DQN) algorithm and apply it to the Atari 2600 games. Our algorithm outperforms the decaying ɛ–greedy strategy when we evaluate our algorithm across 14 Atari games in the Arcade Learning Environment (ALE).","PeriodicalId":243724,"journal":{"name":"2021 International Conference on Digital Society and Intelligent Systems (DSInS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Digital Society and Intelligent Systems (DSInS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/dsins54396.2021.9670573","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Reinforcement learning has been used successfully in many applications and has achieved prodigious performance (for example, in video games), and DQN is a well-known algorithm in RL. However, it has some disadvantages in practical applications, and the exploration–exploitation dilemma is one of them. To address this problem, common exploration strategies such as ε-greedy have arisen. Unfortunately, they are sample-inefficient and ineffective because of the uncertainty of later exploration. In this paper, we propose a model-based exploration method that learns a state transition model to guide exploration. Using standard machine learning training procedures, we train the state transition model networks to improve exploration efficiency and sample efficiency. We compare our algorithm with ε-greedy on the Deep Q-Network (DQN) algorithm and apply it to Atari 2600 games. Our algorithm outperforms the decaying ε-greedy strategy when evaluated across 14 Atari games in the Arcade Learning Environment (ALE).
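The abstract does not spell out how the learned state transition model drives exploration. One common instantiation of model-based exploration, sketched below in PyTorch, adds the transition model's prediction error as an intrinsic bonus to the environment reward, so the DQN agent is drawn toward transitions its model still predicts poorly. This is a minimal illustrative sketch, not the paper's implementation; the names `TransitionModel`, `intrinsic_bonus`, and the weight `beta` are assumptions introduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransitionModel(nn.Module):
    """Predicts the next state from the current state and a one-hot action."""

    def __init__(self, state_dim, num_actions, hidden=128):
        super().__init__()
        self.num_actions = num_actions
        self.net = nn.Sequential(
            nn.Linear(state_dim + num_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        one_hot = F.one_hot(action, self.num_actions).float()
        return self.net(torch.cat([state, one_hot], dim=-1))


def intrinsic_bonus(model, state, action, next_state):
    """Prediction error of the transition model: large in regions of the
    state space the agent has not yet learned, i.e. under-explored ones."""
    with torch.no_grad():
        pred = model(state, action)
        return F.mse_loss(pred, next_state, reduction="none").mean(dim=-1)


def train_model(model, optimizer, state, action, next_state):
    """One supervised update of the transition model on observed transitions."""
    loss = F.mse_loss(model(state, action), next_state)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Hypothetical use inside a DQN training loop: the shaped reward for the
# Q-learning target would be
#   r_total = r_env + beta * intrinsic_bonus(model, s, a, s_next)
# where beta (an assumed hyperparameter) trades off exploration of poorly
# modeled transitions against exploitation of the environment reward.
```

Under this reading, the transition model is trained alongside the Q-network with ordinary supervised updates, and exploration is directed rather than uniformly random as in ε-greedy; the exact coupling used in the paper may differ.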