Inferring Cost Functions Using Reward Parameter Search and Policy Gradient Reinforcement Learning
Emir Arditi, Tjaša Kunavar, Emre Ugur, J. Babič, E. Oztop
IECON 2021 – 47th Annual Conference of the IEEE Industrial Electronics Society, 13 October 2021. DOI: 10.1109/IECON48115.2021.9589967
This study focuses on inferring the cost functions underlying observed movement data using reward parameter search and policy-gradient-based Reinforcement Learning (RL). The behavior data for this task are obtained from a series of squat-to-stand movements performed by human participants under dynamic perturbations. The key parameter searched in the cost function is the weight of the total torque used in performing the squat-to-stand action. An approximate model is used to learn squat-to-stand movements via a policy gradient method, namely Proximal Policy Optimization (PPO). A behavioral similarity metric based on the Center of Mass (COM) is used to find the most likely weight parameter. The stochasticity in PPO training results is handled by averaging over multiple runs, yielding a stable and well-performing Inverse Reinforcement Learning (IRL) procedure. The results indicate that for some participants, the reward function parameters of the expert were inferred successfully.
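As a rough illustration of the approach described in the abstract, the sketch below grid-searches a candidate torque weight, trains a policy under the corresponding cost function, and scores each candidate by a COM-based similarity to the recorded human trajectory, averaging over several runs to damp training stochasticity. The function names, the RMSE-based similarity, and the candidate grid are illustrative assumptions, not the paper's exact implementation; in particular, the PPO training step is replaced by a synthetic placeholder so the sketch runs end to end.

```python
# Minimal sketch of the outer reward-parameter search, assuming a cost of the
# form task_term + w * total_torque and a COM-trajectory similarity metric.
import numpy as np


def train_ppo_and_rollout_com(torque_weight: float, seed: int) -> np.ndarray:
    """Placeholder for training PPO on the approximate squat-to-stand model
    with the given torque weight and rolling out the learned policy.
    Returns a synthetic COM trajectory (T x 2) so the sketch is runnable;
    in the real pipeline this would come from the simulated policy."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, 100)
    height = 0.5 + 0.4 * t - 0.05 * torque_weight * np.sin(np.pi * t)
    horizontal = 0.02 * rng.standard_normal(t.shape)
    return np.stack([horizontal, height], axis=1)


def com_similarity(sim_com: np.ndarray, human_com: np.ndarray) -> float:
    """Negative RMSE between simulated and observed COM trajectories;
    higher means more behaviorally similar."""
    return -float(np.sqrt(np.mean((sim_com - human_com) ** 2)))


def infer_torque_weight(human_com: np.ndarray,
                        candidate_weights: np.ndarray,
                        n_runs: int = 5) -> float:
    """For each candidate weight, average the similarity over several PPO
    runs (different seeds) to smooth out training stochasticity, then return
    the weight whose policies best reproduce the human COM trajectory."""
    scores = []
    for w in candidate_weights:
        runs = [com_similarity(train_ppo_and_rollout_com(w, seed), human_com)
                for seed in range(n_runs)]
        scores.append(np.mean(runs))
    return float(candidate_weights[int(np.argmax(scores))])


if __name__ == "__main__":
    # Stand-in for one participant's recorded COM trajectory.
    reference = train_ppo_and_rollout_com(torque_weight=1.0, seed=0)
    grid = np.linspace(0.0, 2.0, 9)
    print("inferred torque weight:", infer_torque_weight(reference, grid))
```

The key design choice mirrored here is that the IRL problem is reduced to a one-dimensional search over the torque weight, with policy learning treated as an inner loop and behavioral similarity, rather than reward value, used as the selection criterion.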