Zijian Gao, Kele Xu, Hongda Jia, Tianjiao Wan, Bo Ding, Dawei Feng, Xinjun Mao, Huaimin Wang
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Published: 2023-06-04 · DOI: 10.1109/ICASSP49357.2023.10095379
Complementary Learning System Based Intrinsic Reward in Reinforcement Learning
Deep reinforcement learning has achieved encouraging performance in many domains. However, one of its primary challenges, the sparsity of extrinsic rewards, remains far from solved. Complementary learning system theory suggests that effective human learning relies on two complementary learning systems utilizing short-term and long-term memories. Inspired by the fact that humans evaluate curiosity by comparing current observations with historical information, we propose a novel intrinsic reward, CLS-IR, which aims to address the problems caused by sparse extrinsic rewards. Specifically, we train a self-supervised predictive model with short-term and long-term memories maintained via exponential moving averages. We employ the information gain between the two memories as the intrinsic reward, which incurs no additional training cost yet leads to better exploration. To investigate the effectiveness of CLS-IR, we conduct extensive experimental evaluations; the results demonstrate that CLS-IR achieves state-of-the-art performance on Atari games and the DeepMind Control Suite.
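The core mechanism described in the abstract can be illustrated with a minimal sketch: keep two exponential-moving-average (EMA) copies of a predictor's weights, one with a fast decay (short-term memory) and one with a slow decay (long-term memory), and use the disagreement between their predictions as the intrinsic reward. Note this is an illustrative reconstruction, not the paper's implementation: the class name `CLSIntrinsicReward`, the linear predictor, the decay rates, and the use of an L2 prediction gap as a stand-in for the information gain are all assumptions.

```python
import numpy as np


class CLSIntrinsicReward:
    """Hedged sketch of a CLS-style intrinsic reward.

    Two EMA copies of a (toy, linear) predictor's weights stand in for the
    short-term (fast-decay) and long-term (slow-decay) memories; the
    intrinsic reward is the disagreement between their predictions.
    """

    def __init__(self, dim, tau_short=0.9, tau_long=0.999, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(dim, dim))  # online predictor weights
        self.w_short = self.w.copy()          # short-term memory (fast EMA)
        self.w_long = self.w.copy()           # long-term memory (slow EMA)
        self.tau_short = tau_short
        self.tau_long = tau_long

    def update(self, grad, lr=0.01):
        # One placeholder gradient step on the online predictor, then
        # refresh both memories by exponential moving average. No extra
        # training pass is needed for the memories themselves.
        self.w -= lr * grad
        self.w_short = self.tau_short * self.w_short + (1 - self.tau_short) * self.w
        self.w_long = self.tau_long * self.w_long + (1 - self.tau_long) * self.w

    def intrinsic_reward(self, obs):
        # Disagreement between the two memories' predictions for the
        # current observation, used here as a simple proxy for the
        # information gain between short- and long-term memories.
        p_short = self.w_short @ obs
        p_long = self.w_long @ obs
        return float(np.linalg.norm(p_short - p_long))
```

In use, an agent would add `intrinsic_reward(obs)` to the (possibly sparse) extrinsic reward at each step and call `update` with the predictor's gradient; because the slow memory lags the fast one most in rarely-visited regions, the bonus is largest exactly where exploration is needed.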