{"title":"Experimental analysis of eligibility traces strategies in temporal difference learning","authors":"Jinsong Leng, L. Jain, C. Fyfe","doi":"10.1504/IJKESDP.2009.021982","DOIUrl":null,"url":null,"abstract":"Temporal difference (TD) learning is a model-free reinforcement learning technique, which adopts an infinite horizon discount model and uses an incremental learning technique for dynamic programming. The state value function is updated in terms of sample episodes. Utilising eligibility traces is a key mechanism in enhancing the rate of convergence. TD(λ) represents the use of eligibility traces by introducing the parameter λ. However, the underlying mechanism of eligibility traces with an approximation function has not been well understood, either from theoretical point of view or from practical point of view. The TD(λ) method has been proved to be convergent with local tabular state representation. Unfortunately, proving convergence of TD(λ) with function approximation is still an important open theoretical question. This paper aims to investigate the convergence and the effects of different eligibility traces. In this paper, we adopt Sarsa(λ) learning control algorithm with a large, stochastic and dynamic simulation environment called SoccerBots. The state value function is represented by a linear approximation function known as tile coding. The performance metrics generated from the simulation system can be used to analyse the mechanism of eligibility traces.","PeriodicalId":347123,"journal":{"name":"Int. J. Knowl. Eng. Soft Data Paradigms","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Knowl. Eng. Soft Data Paradigms","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1504/IJKESDP.2009.021982","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Temporal difference (TD) learning is a model-free reinforcement learning technique that adopts an infinite-horizon discounted model and uses incremental updates to perform dynamic programming. The state value function is updated from sampled episodes. Eligibility traces are a key mechanism for improving the rate of convergence, and TD(λ) denotes their use through the parameter λ. However, the behaviour of eligibility traces combined with function approximation is not well understood, from either a theoretical or a practical point of view. TD(λ) has been proven to converge with a tabular state representation, but proving the convergence of TD(λ) with function approximation remains an important open theoretical question. This paper investigates the convergence and the effects of different eligibility trace strategies. We apply the Sarsa(λ) learning control algorithm in a large, stochastic and dynamic simulation environment called SoccerBots, with the state value function represented by a linear function approximator known as tile coding. The performance metrics generated by the simulation system are used to analyse the mechanism of eligibility traces.
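To make the update concrete, the following minimal Python sketch shows Sarsa(λ) with a linear approximator over binary tile features and with either replacing or accumulating eligibility traces. It is an illustration of the general technique the abstract describes, not the paper's experimental code: the hash-based tile coder, the toy Corridor task, and all constants (NUM_TILES, NUM_TILINGS, the step sizes) are assumed stand-ins for the SoccerBots setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative linear function approximator over binary tile features ---
NUM_TILES = 256    # size of the weight vector (illustrative choice)
NUM_TILINGS = 8    # number of overlapping tilings, as in standard tile coding

def active_tiles(state, action):
    """Stand-in for a real tile coder: hash each (tiling, shifted state, action)
    triple to one index, giving NUM_TILINGS active binary features."""
    return [hash((t, int(state) + t, action)) % NUM_TILES for t in range(NUM_TILINGS)]

def q_value(w, state, action):
    """Q(s, a) is the sum of the weights of the active tiles (linear in the features)."""
    return w[active_tiles(state, action)].sum()

def epsilon_greedy(w, state, actions, epsilon):
    if rng.random() < epsilon:
        return actions[rng.integers(len(actions))]
    values = [q_value(w, state, a) for a in actions]
    return actions[int(np.argmax(values))]

# --- Toy corridor task (placeholder for the SoccerBots environment) ---
class Corridor:
    """Agent starts at 0 and must reach position 10; reward is -1 per step."""
    actions = (-1, +1)
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos = max(0, self.pos + action)
        done = self.pos >= 10
        return self.pos, -1.0, done

def sarsa_lambda(env, episodes=200, alpha=0.1, gamma=0.99, lam=0.9,
                 epsilon=0.1, replacing=True):
    """Sarsa(lambda) with linear function approximation and eligibility traces."""
    w = np.zeros(NUM_TILES)          # weight vector of the approximator
    for _ in range(episodes):
        z = np.zeros(NUM_TILES)      # eligibility traces, reset each episode
        state = env.reset()
        action = epsilon_greedy(w, state, env.actions, epsilon)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            tiles = active_tiles(state, action)
            # TD error: delta = r + gamma * Q(s', a') - Q(s, a)
            delta = reward - q_value(w, state, action)
            if not done:
                next_action = epsilon_greedy(w, next_state, env.actions, epsilon)
                delta += gamma * q_value(w, next_state, next_action)
            else:
                next_action = None
            # Replacing traces clamp active features to 1; accumulating traces add 1.
            if replacing:
                z[tiles] = 1.0
            else:
                z[tiles] += 1.0
            # Gradient step on all traced weights, then decay the traces by gamma * lambda.
            w += (alpha / NUM_TILINGS) * delta * z
            z *= gamma * lam
            state, action = next_state, next_action
    return w

if __name__ == "__main__":
    w = sarsa_lambda(Corridor())
    print("Q(0, +1) =", q_value(w, 0, +1), " Q(0, -1) =", q_value(w, 0, -1))
```

The only difference between the two trace strategies is the single line that updates z for the active features; with accumulating traces, features visited repeatedly can build traces larger than one, which tends to speed up credit assignment but can also destabilise learning with large step sizes.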