Region-based Q-learning using convex clustering approach
J. H. Kim, I. Suh, Sang-Rok Oh, Y. J. Cho, Y. K. Chung
Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '97), September 7, 1997
DOI: 10.1109/IROS.1997.655073
Citations: 7
Abstract
For continuous state-space applications, a novel Q-learning method is proposed that incorporates a region-based reward assignment to solve the structural credit assignment problem, together with a convex clustering approach to find regions sharing the same reward attribution property. Our learning method can estimate the current Q-value of an arbitrarily given state using effect functions, and learns its actions in a manner similar to that of Q-learning. Thus, our method enables robots to move smoothly in a real environment. To show its validity, the proposed method is compared with the conventional Q-learning method on a simple two-dimensional free-space navigation problem, and visual tracking simulation results involving a 2-DOF SCARA robot are also presented.
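For context, the conventional baseline the paper compares against can be sketched as standard tabular Q-learning on a discretized 2-D free-space navigation task. The grid size, reward structure, and hyperparameters below are illustrative assumptions, not values from the paper; the paper's own contribution (region-based reward assignment, effect functions, convex clustering) is not reproduced here.

```python
import random

# Illustrative sketch of conventional tabular Q-learning on a discretized
# 2-D free-space navigation problem (all parameters are assumptions).

GRID = 5                                       # 5x5 discretization of the 2-D space
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Move within grid bounds; reward 1 on reaching the goal, 0 elsewhere."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(x, y): [0.0] * len(ACTIONS)
         for x in range(GRID) for y in range(GRID)}
    for _ in range(episodes):
        # Exploring starts: begin each episode from a random non-goal state.
        s = (rng.randrange(GRID), rng.randrange(GRID))
        while s == GOAL:
            s = (rng.randrange(GRID), rng.randrange(GRID))
        for _ in range(100):                   # cap episode length
            if rng.random() < eps:             # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            # One-step Q-learning update:
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q
```

Because the Q-table is defined only on discrete cells, the greedy policy changes abruptly at cell boundaries; the paper's region-based method with effect functions is aimed precisely at smoothing the Q-value estimate over the continuous state space.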