{"title":"APSN: adaptive prediction sample network in Deep Q learning","authors":"Shijie Chu","doi":"10.1117/12.3031933","DOIUrl":null,"url":null,"abstract":"Deep Q learning is a crucial method of deep reinforcement learning and has achieved remarkable success in multiple applications. However, Deep Q-learning suffers from low sample efficiency. To overcome this limitation, we introduce a novel algorithm, adaptive prediction sample network (APSN), to improve the sample efficiency. APSN is designed to predict the importance of each sample to policy updates, enabling efficient sample selection. We introduce a new metric to evaluate the importance of samples and use it to train the APSN network. In the experimental parts, we evaluate our algorithm on four Atari games in OpenAI Gym and compare APSN with SDQN. Experimental results show that APSN performs better in terms of sample efficiency and provides an effective solution for improving the sample efficiency of deep reinforcement learning. This research result is expected to promote the performance of deep reinforcement learning algorithms in practical applications.","PeriodicalId":342847,"journal":{"name":"International Conference on Algorithms, Microchips and Network Applications","volume":" 26","pages":"131711V - 131711V-5"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Algorithms, Microchips and Network Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3031933","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep Q-learning is a core method of deep reinforcement learning and has achieved remarkable success in many applications. However, it suffers from low sample efficiency. To overcome this limitation, we introduce a novel algorithm, the adaptive prediction sample network (APSN). APSN predicts the importance of each sample to the policy update, enabling efficient sample selection. We introduce a new metric to evaluate sample importance and use it to train the APSN. In the experiments, we evaluate our algorithm on four Atari games in OpenAI Gym and compare APSN with SDQN. The results show that APSN achieves better sample efficiency and provides an effective way to improve the sample efficiency of deep reinforcement learning. These results are expected to improve the performance of deep reinforcement learning algorithms in practical applications.
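The abstract does not specify APSN's architecture or its importance metric, so the sketch below is only a minimal illustration of the general idea: a small network scores each transition's importance, the top-scoring transitions are selected for the Q-network update, and the scorer is trained by regression onto an importance target. All names here (ImportancePredictor, select_important, train_predictor) are hypothetical, and |TD error| is used as a stand-in proxy for the paper's unspecified metric.

```python
import torch
import torch.nn as nn

class ImportancePredictor(nn.Module):
    """Hypothetical stand-in for APSN: maps a transition (state,
    action, reward) to a scalar importance score used to select
    samples for the Q-network update."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # Input encoding (an assumption): state, one-hot action, reward.
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action_onehot, reward):
        x = torch.cat([state, action_onehot, reward], dim=-1)
        return self.net(x).squeeze(-1)

def select_important(predictor, states, actions_onehot, rewards, k):
    """Score a candidate pool and keep the top-k transitions for the
    DQN update -- one plausible selection rule, not the paper's."""
    with torch.no_grad():
        scores = predictor(states, actions_onehot, rewards)
    return torch.topk(scores, k=k).indices

def train_predictor(predictor, optimizer, batch, target_importance):
    """Regress predicted importance onto a target metric; here we
    assume |TD error| as the target, which is only a proxy for the
    metric introduced in the paper."""
    states, actions_onehot, rewards = batch
    pred = predictor(states, actions_onehot, rewards)
    loss = nn.functional.mse_loss(pred, target_importance)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, the scorer would run over a sampled candidate pool from the replay buffer before each update, so the extra cost is one cheap forward pass in exchange for training the Q-network only on transitions predicted to matter most.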