Unsupervised Discovery of Transitional Skills for Deep Reinforcement Learning
Qiangxing Tian, Jinxin Liu, Guanchu Wang, Donglin Wang
2021 International Joint Conference on Neural Networks (IJCNN), published 2021-07-18
DOI: 10.1109/IJCNN52387.2021.9533820
Citations: 9
Abstract
By maximizing an information-theoretic objective, several recent methods enable an agent to explore the environment and learn skills without any extrinsic reward. However, when multiple consecutive skills are used to complete a specific task, the transition from one skill to another does not guarantee success, because of the evident gap between skills. In this paper, we propose a novel unsupervised reinforcement learning approach that learns transitional skills in addition to pursuing diverse primitive skills. By introducing an extra latent variable to capture the dependence between skills, our method discovers both primitive and transitional skills by optimizing a novel information-theoretic objective. Across various robotic tasks, our results demonstrate the effectiveness of the method in learning both diverse primitive skills and transitional skills, and further show its superiority over the baselines in achieving smooth transitions between skills. Videos of transitional skills can be found on the project website: https://sites.google.com/view/udts-skill.
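The abstract does not spell out the paper's exact objective, but the general family it refers to is mutual-information-based skill discovery (as in DIAYN-style methods): a discriminator learns to infer the latent skill from visited states, and its log-probability serves as an intrinsic reward. The sketch below is only an illustration of that family under stated assumptions, not the authors' method; the names SkillDiscriminator and intrinsic_reward are hypothetical, and the paper's extra latent variable for skill dependence is not modeled here.

```python
# Minimal sketch of a mutual-information skill-discovery reward (DIAYN-style).
# NOT the paper's objective; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkillDiscriminator(nn.Module):
    """Predicts which latent skill z produced a given state s."""
    def __init__(self, state_dim: int, n_skills: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_skills),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # logits over skills

def intrinsic_reward(disc: SkillDiscriminator,
                     state: torch.Tensor,
                     skill: torch.Tensor,
                     n_skills: int) -> torch.Tensor:
    """r = log q(z|s) - log p(z), with p(z) uniform over n_skills."""
    log_q = F.log_softmax(disc(state), dim=-1)
    log_q_z = log_q.gather(-1, skill.unsqueeze(-1)).squeeze(-1)
    log_p_z = -torch.log(torch.tensor(float(n_skills)))
    return log_q_z - log_p_z

# Usage: sample one skill per episode, roll out a skill-conditioned policy,
# train the policy on this intrinsic reward, and train the discriminator
# with cross-entropy to recover the sampled skill from visited states.
disc = SkillDiscriminator(state_dim=8, n_skills=4)
s = torch.randn(32, 8)                        # batch of states
z = torch.randint(0, 4, (32,))                # sampled skill indices
r = intrinsic_reward(disc, s, z, n_skills=4)  # pseudo-reward for the RL update
```

Maximizing this reward drives states visited under different skills apart, which yields the diverse primitive skills the abstract mentions; the paper's contribution, by contrast, is to additionally learn transitional skills that bridge the gap between such primitives.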