Title: Addressing maximization bias in reinforcement learning with two-sample testing
Authors: Martin Waltz, Ostap Okhrin
DOI: 10.1016/j.artint.2024.104204
Journal: Artificial Intelligence, Vol. 336, Article 104204 (Q1, Computer Science, Artificial Intelligence; IF 5.1)
Published: 2024-08-16 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0004370224001401
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S0004370224001401/pdfft?md5=5b6841aff0d8d49b8cc40332377d2f38&pid=1-s2.0-S0004370224001401-main.pdf
Citations: 0
Abstract
Value-based reinforcement-learning algorithms have shown strong results in games, robotics, and other real-world applications. Overestimation bias is a known threat to these algorithms and can sometimes cause dramatic performance drops or even complete algorithmic failure. We frame the bias problem statistically, treating it as an instance of estimating the maximum expected value (MEV) of a set of random variables. We propose the T-Estimator (TE), based on two-sample testing for the mean, which flexibly interpolates between over- and underestimation by adjusting the significance level of the underlying hypothesis tests. We also introduce a generalization, termed the K-Estimator (KE), that obeys the same bias and variance bounds as the TE and relies on a nearly arbitrary kernel function. We introduce modifications of Q-Learning and the Bootstrapped Deep Q-Network (BDQN) using the TE and the KE, and prove convergence in the tabular setting. Furthermore, we propose an adaptive variant of the TE-based BDQN that dynamically adjusts the significance level to minimize the absolute estimation bias. All proposed estimators and algorithms are thoroughly tested and validated on diverse tasks and environments, illustrating the bias control and performance potential of the TE and KE.
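The maximization bias the abstract refers to can be seen in a small simulation. The sketch below is not the paper's T-Estimator; it only illustrates the underlying statistical problem: when several random variables all have the same true mean, the naive estimator max_i(sample mean of variable i) systematically overestimates the true MEV, because the max operator favors whichever variable happens to draw a high sample mean. All parameter values (K, N, TRIALS) are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(0)

# K random variables (e.g. action values), all with true mean 0,
# so the true maximum expected value (MEV) is exactly 0.
K = 10          # number of random variables
N = 20          # samples drawn per variable
TRIALS = 2000   # Monte Carlo repetitions

naive_estimates = []
for _ in range(TRIALS):
    # Sample mean of each variable from N i.i.d. Gaussian draws.
    sample_means = [
        statistics.fmean(random.gauss(0.0, 1.0) for _ in range(N))
        for _ in range(K)
    ]
    # Naive MEV estimator: take the max of the sample means.
    naive_estimates.append(max(sample_means))

# Since the true MEV is 0, the average estimate equals the bias.
bias = statistics.fmean(naive_estimates)
print(f"average naive max-of-means estimate: {bias:.3f}")  # noticeably > 0
```

Estimators such as the TE aim to counteract this effect; per the abstract, the TE does so via two-sample tests for the mean, with the significance level steering the estimate between over- and underestimation.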
Journal introduction:
The journal Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. The journal also accepts papers describing AI applications, provided they focus on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.