{"title":"大型离散动作空间中强化学习的联合动作表示和优先经验重放","authors":"Xueyu Wei, Wei Xue, Wei Zhao, Yuanxia Shen, Gaohang Yu","doi":"10.1145/3583788.3583802","DOIUrl":null,"url":null,"abstract":"In dealing with the large discrete action spaces, a joint action representation and prioritized experience replay method is proposed in this paper, which consists of three modules. In the first module, we use the k-nearest neighbor method to reduce the dimensionality of the original action space, generating a compact action space, and then the critic network is introduced to further evaluate and filter this compact space to obtain the optimal action. Note that the optimal action may have inconsistency with the actual desired action. Then in the second module, we introduce a multi-step update technique to reduce the training variance when storing data in the replay buffer. In the third module, considering the existence of correlation between samples when sampling data, we assign the corresponding weight to the sample experience by calculating the absolute value of temporal difference error and use such a non-uniform sampling method to prioritize the samples for sampling. Experimental results on four benchmark environments demonstrate the effectiveness and efficiency of the proposed method in dealing with the large discrete action spaces.","PeriodicalId":292167,"journal":{"name":"Proceedings of the 2023 7th International Conference on Machine Learning and Soft Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Joint Action Representation and Prioritized Experience Replay for Reinforcement Learning in Large Discrete Action Spaces\",\"authors\":\"Xueyu Wei, Wei Xue, Wei Zhao, Yuanxia Shen, Gaohang Yu\",\"doi\":\"10.1145/3583788.3583802\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In dealing with the large discrete action spaces, a joint action representation and prioritized experience replay method is proposed in this paper, which consists of three modules. In the first module, we use the k-nearest neighbor method to reduce the dimensionality of the original action space, generating a compact action space, and then the critic network is introduced to further evaluate and filter this compact space to obtain the optimal action. Note that the optimal action may have inconsistency with the actual desired action. Then in the second module, we introduce a multi-step update technique to reduce the training variance when storing data in the replay buffer. In the third module, considering the existence of correlation between samples when sampling data, we assign the corresponding weight to the sample experience by calculating the absolute value of temporal difference error and use such a non-uniform sampling method to prioritize the samples for sampling. 
Experimental results on four benchmark environments demonstrate the effectiveness and efficiency of the proposed method in dealing with the large discrete action spaces.\",\"PeriodicalId\":292167,\"journal\":{\"name\":\"Proceedings of the 2023 7th International Conference on Machine Learning and Soft Computing\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 7th International Conference on Machine Learning and Soft Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3583788.3583802\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 7th International Conference on Machine Learning and Soft Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3583788.3583802","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
To deal with large discrete action spaces, this paper proposes a joint action representation and prioritized experience replay method consisting of three modules. In the first module, the k-nearest-neighbor method reduces the original large action space to a compact candidate action space, and a critic network then evaluates and filters these candidates to obtain the optimal action; note that this optimal action may be inconsistent with the action actually desired. In the second module, a multi-step update technique reduces training variance when transitions are stored in the replay buffer. In the third module, because stored samples are correlated, each sample experience is weighted by the absolute value of its temporal-difference error, and this non-uniform sampling scheme prioritizes samples during replay. Experimental results on four benchmark environments demonstrate the effectiveness and efficiency of the proposed method in large discrete action spaces.
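The abstract gives no code for the first module, but its description (k-NN candidate generation followed by critic-based filtering) can be sketched as below. This is a minimal illustration, assuming, as in Wolpertinger-style methods, that each discrete action has a continuous embedding and that an actor produces a continuous proto-action; all function names, the `critic` interface, and the Euclidean distance metric are assumptions, not details from the paper.

```python
import numpy as np

def knn_candidates(proto_action, action_embeddings, k):
    """Indices of the k discrete actions whose embeddings lie closest
    (in Euclidean distance) to the continuous proto-action."""
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    return np.argsort(dists)[:k]

def select_action(state, proto_action, action_embeddings, critic, k=10):
    """Critic refinement step: evaluate Q(state, a) for each k-NN
    candidate and return the index of the highest-scoring action."""
    candidates = knn_candidates(proto_action, action_embeddings, k)
    q_values = [critic(state, action_embeddings[i]) for i in candidates]
    return candidates[int(np.argmax(q_values))]

# Toy usage: 10,000 discrete actions embedded in R^8, with stand-ins
# for the critic network and the actor output.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(10_000, 8))
critic = lambda s, a: float(s @ a)   # placeholder for the learned critic
state = rng.normal(size=8)
proto = rng.normal(size=8)           # placeholder for the actor's proto-action
best = select_action(state, proto, embeddings, critic)
```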
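For the second module, the abstract does not fix the step count n or the exact bookkeeping; the sketch below assumes the standard n-step return, in which a sliding window of one-step transitions is collapsed into a single transition with a discounted cumulative reward before it is written to the replay buffer. Names and the transition tuple layout are illustrative.

```python
from collections import deque

def nstep_transition(window, gamma):
    """Collapse a window of 1-step transitions (s, a, r, s_next, done)
    into one n-step transition with a discounted cumulative reward,
    truncating at the first terminal step."""
    R, s_next, done = 0.0, window[-1][3], window[-1][4]
    for i, (_, _, r, s_n, d) in enumerate(window):
        R += (gamma ** i) * r
        if d:
            s_next, done = s_n, True
            break
    s, a = window[0][0], window[0][1]
    return s, a, R, s_next, done

# Usage inside a data-collection loop: keep the last n steps and emit
# one aggregated transition per step once the window is full, e.g.
n, gamma = 3, 0.99
window = deque(maxlen=n)
# window.append((s, a, r, s_next, done)) each step; when
# len(window) == n, store nstep_transition(window, gamma) in the buffer.
```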
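The third module's description matches proportional prioritized experience replay in the style of Schaul et al.: priorities are set from the absolute TD error, and sampling is non-uniform in proportion to them. The sketch below assumes the common alpha exponent and importance-sampling beta correction, neither of which is stated in the abstract; the class and parameter names are illustrative.

```python
import numpy as np

class ProportionalReplay:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.pos = [], 0
        self.priorities = np.zeros(capacity)

    def add(self, transition):
        """Store with the current max priority so new samples get replayed."""
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        """Draw non-uniformly in proportion to priority^alpha and return
        importance-sampling weights that correct the induced bias."""
        p = self.priorities[:len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update(self, idx, td_errors):
        """Priority = |TD error| + eps, per the weighting in the abstract."""
        self.priorities[idx] = np.abs(td_errors) + self.eps
```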