{"title":"Design and application of deep reinforcement learning algorithms based on unbiased exploration strategies for value functions","authors":"Pingli Lv","doi":"10.1016/j.measen.2024.101241","DOIUrl":null,"url":null,"abstract":"<div><p>Deep Q-networks, as a representation of several classical techniques, have emerged as one of the primary branches in the field of value function-based reinforcement learning. The paper addresses two issues that come up in the realm of reinforcement learning for value function solving: estimating bias and maximizing projected action value function evaluation. By treating the estimation of the highest expected action value as a random selection estimation problem, the suggested approach addresses the estimation bias issue from the standpoint of random selection. A random choice estimate procedure forms the basis of the technique. Firstly, a proposed random choice estimator is presented and its theoretical fairness is established. Second, the estimator is applied to create a reinforcement learning method in a different application. Two techniques, namely stochastic two-depth Q-networks and double-Q learning, are suggested based on the random choice estimation technique. The main parameters of the suggested algorithms are then investigated, and parameter formulas for both predictable and unpredictable scenarios are created. Lastly, a random choice estimation perspective suggests a stochastic two-depth Q-network. The new approach may effectively remove bias in value function estimate, enhance learning performance, and stabilise the learning process, according to simulation findings on Grid World and Atari games.</p></div>","PeriodicalId":34311,"journal":{"name":"Measurement Sensors","volume":"34 ","pages":"Article 101241"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665917424002174/pdfft?md5=3c8debe2060b83588fd89abff0020cfb&pid=1-s2.0-S2665917424002174-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Measurement Sensors","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2665917424002174","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Engineering","Score":null,"Total":0}
Abstract
Deep Q-networks, as a representative of several classical techniques, have emerged as one of the primary branches of value function-based reinforcement learning. This paper addresses two issues that arise when solving for value functions in reinforcement learning: estimation bias and the evaluation of the maximum expected action value. By treating the estimation of the maximum expected action value as a random-selection problem, the proposed approach tackles estimation bias from the standpoint of random selection; a random-choice estimation procedure forms the basis of the technique. First, a random-choice estimator is presented and its unbiasedness is established theoretically. Second, the estimator is applied to construct reinforcement learning methods: two algorithms, stochastic double deep Q-networks and double Q-learning, are proposed on the basis of the random-choice estimation technique. The key parameters of the proposed algorithms are then analyzed, and parameter formulas are derived for both deterministic and stochastic settings. Finally, a stochastic double deep Q-network is proposed from the random-choice estimation perspective. According to simulation results on Grid World and Atari games, the new approach can effectively remove bias in value function estimation, improve learning performance, and stabilize the learning process.
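To make the core idea concrete, here is a minimal sketch (not the paper's code; all names and constants are illustrative) of why the usual single-max estimator overestimates the maximum expected action value, and how a random-choice split removes that bias: the samples for each action are randomly partitioned into two halves, one half selects the arg-max action, and the independent other half evaluates it.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 5      # every action's true value is 0, so the true maximum is 0
n_samples = 100    # noisy value samples available per action
n_trials = 10_000

single_max = np.empty(n_trials)
random_choice = np.empty(n_trials)

for t in range(n_trials):
    # Noisy samples of each action's value; the true expected value is 0.
    samples = rng.normal(0.0, 1.0, size=(n_actions, n_samples))

    # Single-max estimator: max over the sample means. Positively biased,
    # because the max operator favors actions with upward noise.
    single_max[t] = samples.mean(axis=1).max()

    # Random-choice estimator: randomly split the samples in two; one half
    # selects the greedy action, the independent other half evaluates it.
    perm = rng.permutation(n_samples)
    half_a = samples[:, perm[: n_samples // 2]]
    half_b = samples[:, perm[n_samples // 2 :]]
    a_star = half_a.mean(axis=1).argmax()
    random_choice[t] = half_b[a_star].mean()

print(f"single-max bias:    {single_max.mean():+.4f}")     # clearly positive
print(f"random-choice bias: {random_choice.mean():+.4f}")  # close to zero
```

The same decoupling of selection from evaluation is what double Q-learning (van Hasselt, 2010), on which the abstract builds, applies online: a fair coin decides which of two Q-tables is updated, the chosen table selects the greedy next action, and the other table evaluates it. A hedged tabular sketch, with illustrative names and hyperparameters rather than the paper's notation:

```python
import numpy as np

def double_q_step(Q_A, Q_B, s, a, r, s_next, done,
                  alpha=0.1, gamma=0.99, rng=None):
    """One tabular double Q-learning update on a single transition."""
    rng = np.random.default_rng() if rng is None else rng
    # Random choice: decide which table to update; it selects the greedy
    # next action, and the *other* table supplies the evaluation.
    if rng.random() < 0.5:
        select, evaluate = Q_A, Q_B
    else:
        select, evaluate = Q_B, Q_A
    a_star = int(np.argmax(select[s_next]))
    target = r if done else r + gamma * evaluate[s_next, a_star]
    select[s, a] += alpha * (target - select[s, a])
```

In a Grid World experiment of the kind the abstract mentions, one would hold two tables `Q_A = np.zeros((n_states, n_actions))` and `Q_B = np.zeros_like(Q_A)`, act epsilon-greedily with respect to `Q_A + Q_B`, and call `double_q_step` on every observed transition; the paper's stochastic double deep Q-network presumably replaces these tables with neural function approximators.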