Yunxiao Zhang, Xiaochuan Zhang, Tianlong Shen, Yuan Zhou, Zhiyuan Wang
{"title":"Feature-Option-Action: A domain adaption transfer reinforcement learning framework","authors":"Yunxiao Zhang, Xiaochuan Zhang, Tianlong Shen, Yuan Zhou, Zhiyuan Wang","doi":"10.1109/DSAA53316.2021.9564185","DOIUrl":null,"url":null,"abstract":"Transfer reinforcement learning (TRL) algorithms have achieved success on alleviating the resource-consumption and sample-insufficiency problem in reinforcement learning (RL). Existing works of cross-domain TRL mainly focus on designing a mapping between the state-action space of source and target domains. We, however, propose a novel TRL framework, Feature-Option-Action (FOA), with novel neural network architecture in this work, to avoid the design of explicit mapping functions between source and target domain. FOA learner is normally trained in the source domain, and the parameters of the option components in the neural network would then be used to initialize the learners in target domain. Empirical evidences have shown that our technique could significantly improve the performance of learners in target domains. Meanwhile, we train FOA models with the model updating methods (in our works, we call it step-update) used in Option-Critic, and illustrate that this method can improve the exploration ability of FOA models by increasing the diversity of options. 
We also compare step-update with other model updating methods, and the results show that step-update method performs better for FOA model to make transfer training faster and smoother.","PeriodicalId":129612,"journal":{"name":"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSAA53316.2021.9564185","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Transfer reinforcement learning (TRL) algorithms have had success in alleviating the resource-consumption and sample-insufficiency problems of reinforcement learning (RL). Existing work on cross-domain TRL mainly focuses on designing a mapping between the state-action spaces of the source and target domains. In this work, we instead propose a novel TRL framework, Feature-Option-Action (FOA), with a novel neural network architecture that avoids designing explicit mapping functions between the source and target domains. The FOA learner is first trained in the source domain, and the parameters of the option components of its neural network are then used to initialize learners in the target domain. Empirical evidence shows that our technique significantly improves the performance of learners in target domains. We also train FOA models with the model-updating method used in Option-Critic, which we call step-update, and show that it improves the exploration ability of FOA models by increasing the diversity of options. Comparing step-update with other model-updating methods, we find that step-update makes transfer training faster and smoother for the FOA model.
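The transfer step the abstract describes (copying only the option-component parameters from a source-domain learner into a fresh target-domain learner, while the feature and action components are re-initialized for the new domain) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the component names, parameter shapes, and `transfer_options` helper are all hypothetical placeholders, since the actual FOA architecture is not detailed in the abstract.

```python
import random

def init_learner(feature_dim, num_options, action_dim, seed=None):
    """Create a toy FOA-style learner as three named parameter blocks.
    Feature and action blocks are domain-specific; the option block is
    the part the abstract says gets transferred across domains."""
    rng = random.Random(seed)
    return {
        "feature": [rng.gauss(0, 1) for _ in range(feature_dim)],
        "option":  [rng.gauss(0, 1) for _ in range(num_options)],
        "action":  [rng.gauss(0, 1) for _ in range(action_dim)],
    }

def transfer_options(source, target):
    """Initialize the target learner with the source learner's option
    parameters, leaving the domain-specific feature and action blocks
    at their fresh random initialization."""
    target["option"] = list(source["option"])
    return target

# Source and target domains may have different state/action dimensions,
# which is why no explicit state-action mapping is needed: only the
# shared option block crosses the domain boundary.
source = init_learner(feature_dim=8, num_options=4, action_dim=3, seed=0)
target = init_learner(feature_dim=6, num_options=4, action_dim=5, seed=1)
target = transfer_options(source, target)
```

After the transfer, `target["option"]` equals `source["option"]`, while the feature and action blocks remain independently initialized; in the paper's setting the target learner would then continue training in the new domain from this warm start.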