State Aggregation by Growing Neural Gas for Reinforcement Learning in Continuous State Spaces

Michael Baumann, H. K. Büning
{"title":"State Aggregation by Growing Neural Gas for Reinforcement Learning in Continuous State Spaces","authors":"Michael Baumann, H. K. Büning","doi":"10.1109/ICMLA.2011.134","DOIUrl":null,"url":null,"abstract":"One of the conditions for the convergence of Q-Learning is to visit each state-action pair infinitely (or at least sufficiently) often. This requirement raises problems for large or continuous state spaces. Particularly, in continuous state spaces a discretization sufficiently fine to cover all relevant information usually results in an extremely large state space. In order to speed up and improve learning it is highly beneficial to add generalization to Q-Learning and thus being able to exploit experiences gained earlier. To achieve this, we compute a state space abstraction with a combination of growing neural gas and Q-Learning. This abstraction respects similarity in the state and action space and is constructed based on information achieved from interaction with the environment during learning. We examine the proposed algorithm on a continuous-state reinforcement learning problem and show that the approximated state space and the generalization speed up learning.","PeriodicalId":439926,"journal":{"name":"2011 10th International Conference on Machine Learning and Applications and Workshops","volume":"116 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 10th International Conference on Machine Learning and Applications and Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2011.134","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11

Abstract

One of the conditions for the convergence of Q-Learning is that each state-action pair is visited infinitely (or at least sufficiently) often. This requirement raises problems for large or continuous state spaces. In particular, in continuous state spaces a discretization fine enough to capture all relevant information usually results in an extremely large state space. To speed up and improve learning, it is highly beneficial to add generalization to Q-Learning, so that experience gained earlier can be exploited. To achieve this, we compute a state space abstraction with a combination of growing neural gas and Q-Learning. This abstraction respects similarity in the state and action space and is constructed from information obtained through interaction with the environment during learning. We examine the proposed algorithm on a continuous-state reinforcement learning problem and show that the approximated state space and the generalization speed up learning.
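To make the idea concrete, the following is a minimal sketch of how a growing neural gas can serve as a state aggregator for Q-Learning. It follows Fritzke's standard GNG update rather than the paper's exact construction (which additionally respects action-space similarity), and all names, hyperparameters, and the `env` interface are illustrative assumptions, not details from the paper.

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal growing neural gas (Fritzke, 1995) used as a state aggregator:
    each unit's reference vector is the prototype of one abstract state, and a
    continuous observation maps to the index of its nearest unit. All
    hyperparameter names and defaults are illustrative, not the paper's."""

    def __init__(self, dim, eps_b=0.05, eps_n=0.006, max_age=50,
                 grow_every=100, alpha=0.5, decay=0.995, max_units=100):
        rng = np.random.default_rng(0)
        # Two random prototypes to start; they adapt toward observed states.
        self.w = [rng.standard_normal(dim), rng.standard_normal(dim)]
        self.err = [0.0, 0.0]                  # accumulated quantization error
        self.edges = {frozenset((0, 1)): 0}    # topology edges -> age
        self.eps_b, self.eps_n, self.max_age = eps_b, eps_n, max_age
        self.grow_every, self.alpha = grow_every, alpha
        self.decay, self.max_units, self.t = decay, max_units, 0

    def nearest(self, x):
        return int(np.argmin([np.sum((x - w) ** 2) for w in self.w]))

    def adapt(self, x):
        """One GNG step: move the winner and its topological neighbours toward
        x, age and prune edges, and insert a new unit every grow_every steps."""
        x = np.asarray(x, dtype=float)
        self.t += 1
        d = np.array([np.sum((x - w) ** 2) for w in self.w])
        s1, s2 = (int(i) for i in np.argsort(d)[:2])
        self.err[s1] += float(d[s1])
        self.w[s1] += self.eps_b * (x - self.w[s1])
        for e in list(self.edges):
            if s1 in e:
                self.edges[e] += 1
                j = next(iter(e - {s1}))
                self.w[j] += self.eps_n * (x - self.w[j])
        self.edges[frozenset((s1, s2))] = 0    # create/refresh winner-pair edge
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}
        if self.t % self.grow_every == 0 and len(self.w) < self.max_units:
            q = int(np.argmax(self.err))       # unit with the largest error
            nbrs = [next(iter(e - {q})) for e in self.edges if q in e]
            if nbrs:                           # insert between q and its worst neighbour
                f = max(nbrs, key=lambda j: self.err[j])
                self.w.append(0.5 * (self.w[q] + self.w[f]))
                self.err[q] *= self.alpha
                self.err[f] *= self.alpha
                self.err.append(self.err[q])
                r = len(self.w) - 1
                self.edges.pop(frozenset((q, f)), None)
                self.edges[frozenset((q, r))] = 0
                self.edges[frozenset((f, r))] = 0
        # Isolated units are kept (full GNG removes them) so that unit
        # indices, and hence Q-table entries, stay stable.
        self.err = [e * self.decay for e in self.err]

def q_learning_over_gng(env, gng, n_actions, episodes=200,
                        lr=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning where the discrete state is the index of the nearest
    GNG unit. `env` is assumed to expose reset() -> state and
    step(a) -> (state, reward, done), a simplified Gym-like interface."""
    Q = {}                                     # unit index -> action values
    q = lambda s: Q.setdefault(s, np.zeros(n_actions))
    rng = np.random.default_rng(1)
    for _ in range(episodes):
        x, done = np.asarray(env.reset(), dtype=float), False
        while not done:
            gng.adapt(x)                       # refine the abstraction online
            s = gng.nearest(x)
            a = (rng.integers(n_actions) if rng.random() < eps
                 else int(np.argmax(q(s))))    # epsilon-greedy action choice
            x2, r, done = env.step(int(a))
            x2 = np.asarray(x2, dtype=float)
            s2 = gng.nearest(x2)
            target = r + gamma * (0.0 if done else float(q(s2).max()))
            q(s)[a] += lr * (target - q(s)[a])
            x = x2
    return Q
```

A dictionary-backed Q-table is used because the GNG inserts units online, so the number of abstract states grows during learning; each new unit simply receives a fresh row of zero-initialized action values on first visit.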