{"title":"Customized learning algorithms for episodic tasks with acyclic state spaces","authors":"T. Bountourelis, S. Reveliotis","doi":"10.1109/COASE.2009.5234189","DOIUrl":null,"url":null,"abstract":"The work presented in this paper provides a practical, customized learning algorithm for reinforcement learning tasks that evolve episodically over acyclic state spaces. The presented results are motivated by the Optimal Disassembly Planning (ODP) problem described in [14], and they complement and enhance some earlier developments on this problem that were presented in [15]. In particular, the proposed algorithm is shown to be a substantial improvement of the original algorithm developed in [15], in terms of, both, the involved computational effort and the attained performance, where the latter is measured by the accumulated reward. The new algorithm also leads to a robust performance gain over the typical Q-learning implementations for the considered problem context.","PeriodicalId":386046,"journal":{"name":"2009 IEEE International Conference on Automation Science and Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE International Conference on Automation Science and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COASE.2009.5234189","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The work presented in this paper provides a practical, customized learning algorithm for reinforcement learning tasks that evolve episodically over acyclic state spaces. The presented results are motivated by the Optimal Disassembly Planning (ODP) problem described in [14], and they complement and enhance earlier developments on this problem presented in [15]. In particular, the proposed algorithm is shown to be a substantial improvement over the original algorithm developed in [15], in terms of both the required computational effort and the attained performance, where the latter is measured by the accumulated reward. The new algorithm also yields a robust performance gain over typical Q-learning implementations in the considered problem context.
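To make the baseline concrete, below is a minimal sketch of the kind of tabular Q-learning implementation the abstract uses as a point of comparison, run on a toy episodic task whose state space is acyclic (states are arranged in layers, every action moves one layer forward, so each episode terminates after a fixed number of steps). The environment, reward function, and hyperparameters are illustrative assumptions for exposition only; they are not the authors' ODP formulation or their customized algorithm.

# Minimal sketch of a tabular Q-learning baseline on a toy episodic task
# with an acyclic (layered) state space. Environment and parameters are
# illustrative assumptions, not the ODP problem from the paper.

import random
from collections import defaultdict

DEPTH, WIDTH, ACTIONS = 4, 3, 2  # assumed layered DAG: DEPTH layers, WIDTH nodes each

def step(state, action):
    """Advance one layer; reward is an arbitrary assumed function of (state, action)."""
    layer, node = state
    next_node = (node + action) % WIDTH
    reward = 1.0 if (layer + node + action) % 3 == 0 else 0.0  # assumed reward
    next_state = (layer + 1, next_node)
    done = (layer + 1 == DEPTH)  # acyclicity: every episode ends after DEPTH steps
    return next_state, reward, done

def q_learning(episodes=5000, alpha=0.1, gamma=1.0, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, action)] -> value estimate, default 0.0
    for _ in range(episodes):
        state = (0, rng.randrange(WIDTH))  # random start node in layer 0
        done = False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                action = rng.randrange(ACTIONS)
            else:
                action = max(range(ACTIONS), key=lambda a: Q[(state, a)])
            next_state, reward, done = step(state, action)
            # one-step Q-learning update; bootstrap term is 0 at terminal states
            target = reward if done else reward + gamma * max(
                Q[(next_state, a)] for a in range(ACTIONS))
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q

if __name__ == "__main__":
    Q = q_learning()
    print(f"learned values for {len(Q)} state-action pairs")

The acyclic structure is what the paper's customized algorithm exploits: since no state can be revisited within an episode, value information propagates backward through the layers in a fixed order, which generic Q-learning, as sketched above, does not take advantage of.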