Control of FES using reinforcement learning: accelerating the learning rate
A. Thrasher, B. Andrews, F. Wang
Proceedings of the 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 'Magnificent Milestones and Emerging Opportunities in Medical Engineering' (Cat. No.97CH36136)
Published: 1997-10-30
DOI: 10.1109/IEMBS.1997.757070 (https://doi.org/10.1109/IEMBS.1997.757070)
Citations: 2
Abstract
Prior knowledge can be used to accelerate the process of reinforcement learning. An adaptive fuzzy logic controller designed to control the swing phase of paraplegic gait was trained on a computer model using reinforcement learning. Instead of starting from scratch with generic fuzzy rules, the controller was jump-started in two different ways with experienced rules. First, supervised learning was used to initially train the controller; then two system parameters were altered, and the reinforcement learning algorithm proceeded to find an optimal solution. This required a total of 34 simulation cycles. The same task, using reinforcement learning alone, required almost 150 cycles. Second, the trained controller was transferred to two individuals of differing body mass and height. Fewer than 20 additional cycles were required to converge in each case. By placing the controller initially closer to an optimal solution, jump-starting greatly reduces the number of simulation cycles required.
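The effect the abstract describes — starting reinforcement learning from an experienced controller instead of from generic rules — can be illustrated with a toy example. The sketch below is not the paper's adaptive fuzzy controller or gait model; it uses plain tabular Q-learning on a hypothetical one-dimensional corridor task, and "jump-starting" is modeled by reusing a previously converged value table as the initial guess (the paper instead seeds the fuzzy rules via supervised learning). All function and parameter names are illustrative assumptions.

```python
import numpy as np

def q_learning(n_states=6, q_init=None, alpha=0.5, gamma=0.9,
               eps=0.2, max_episodes=500, seed=0):
    """Tabular Q-learning on a 1-D corridor: start at state 0, goal at
    n_states - 1. Actions: 0 = left, 1 = right; reward 1 on reaching the
    goal, 0 otherwise. Returns (episodes_until_converged, Q), where
    convergence means the greedy policy is 'always right' (the optimum)
    for 10 consecutive episodes."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2)) if q_init is None else q_init.copy()
    stable = 0
    for ep in range(1, max_episodes + 1):
        s, steps = 0, 0
        while s != n_states - 1 and steps < 200:  # cap episode length
            # Epsilon-greedy action selection with random tie-breaking.
            if rng.random() < eps or Q[s, 0] == Q[s, 1]:
                a = int(rng.integers(2))
            else:
                a = int(Q[s].argmax())
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s, steps = s2, steps + 1
        # Is the greedy policy optimal at every non-terminal state?
        if all(Q[i].argmax() == 1 for i in range(n_states - 1)):
            stable += 1
            if stable == 10:
                return ep, Q
        else:
            stable = 0
    return max_episodes, Q

# Cold start: learn from scratch with a zero-initialized (generic) table.
cold_eps, Q_cold = q_learning()

# Jump-start: reuse the experienced value table as the initial guess,
# analogous to transferring a trained controller to a new situation.
warm_eps, _ = q_learning(q_init=Q_cold, seed=1)

print("cold start:", cold_eps, "episodes; jump-start:", warm_eps, "episodes")
```

Because the jump-started run begins at (or near) the optimum, it only has to confirm the policy rather than discover it, so it converges in fewer episodes — the same qualitative effect as the 150-versus-34-cycle result reported above.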