Mitigating the Stability-Plasticity Dilemma in Adaptive Train Scheduling with Curriculum-Driven Continual DQN Expansion

Achref Jaziri, Etienne Künzel, Visvanathan Ramesh
{"title":"用课程驱动的连续 DQN 扩展缓解自适应列车调度中的稳定性-弹性困境","authors":"Achref Jaziri, Etienne Künzel, Visvanathan Ramesh","doi":"arxiv-2408.09838","DOIUrl":null,"url":null,"abstract":"A continual learning agent builds on previous experiences to develop\nincreasingly complex behaviors by adapting to non-stationary and dynamic\nenvironments while preserving previously acquired knowledge. However, scaling\nthese systems presents significant challenges, particularly in balancing the\npreservation of previous policies with the adaptation of new ones to current\nenvironments. This balance, known as the stability-plasticity dilemma, is\nespecially pronounced in complex multi-agent domains such as the train\nscheduling problem, where environmental and agent behaviors are constantly\nchanging, and the search space is vast. In this work, we propose addressing\nthese challenges in the train scheduling problem using curriculum learning. We\ndesign a curriculum with adjacent skills that build on each other to improve\ngeneralization performance. Introducing a curriculum with distinct tasks\nintroduces non-stationarity, which we address by proposing a new algorithm:\nContinual Deep Q-Network (DQN) Expansion (CDE). Our approach dynamically\ngenerates and adjusts Q-function subspaces to handle environmental changes and\ntask requirements. CDE mitigates catastrophic forgetting through EWC while\nensuring high plasticity using adaptive rational activation functions.\nExperimental results demonstrate significant improvements in learning\nefficiency and adaptability compared to RL baselines and other adapted methods\nfor continual learning, highlighting the potential of our method in managing\nthe stability-plasticity dilemma in the adaptive train scheduling setting.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mitigating the Stability-Plasticity Dilemma in Adaptive Train Scheduling with Curriculum-Driven Continual DQN Expansion\",\"authors\":\"Achref Jaziri, Etienne Künzel, Visvanathan Ramesh\",\"doi\":\"arxiv-2408.09838\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A continual learning agent builds on previous experiences to develop\\nincreasingly complex behaviors by adapting to non-stationary and dynamic\\nenvironments while preserving previously acquired knowledge. However, scaling\\nthese systems presents significant challenges, particularly in balancing the\\npreservation of previous policies with the adaptation of new ones to current\\nenvironments. This balance, known as the stability-plasticity dilemma, is\\nespecially pronounced in complex multi-agent domains such as the train\\nscheduling problem, where environmental and agent behaviors are constantly\\nchanging, and the search space is vast. In this work, we propose addressing\\nthese challenges in the train scheduling problem using curriculum learning. We\\ndesign a curriculum with adjacent skills that build on each other to improve\\ngeneralization performance. Introducing a curriculum with distinct tasks\\nintroduces non-stationarity, which we address by proposing a new algorithm:\\nContinual Deep Q-Network (DQN) Expansion (CDE). Our approach dynamically\\ngenerates and adjusts Q-function subspaces to handle environmental changes and\\ntask requirements. 
CDE mitigates catastrophic forgetting through EWC while\\nensuring high plasticity using adaptive rational activation functions.\\nExperimental results demonstrate significant improvements in learning\\nefficiency and adaptability compared to RL baselines and other adapted methods\\nfor continual learning, highlighting the potential of our method in managing\\nthe stability-plasticity dilemma in the adaptive train scheduling setting.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.09838\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.09838","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

A continual learning agent builds on previous experiences to develop increasingly complex behaviors by adapting to non-stationary and dynamic environments while preserving previously acquired knowledge. However, scaling these systems presents significant challenges, particularly in balancing the preservation of previous policies with the adaptation of new ones to current environments. This balance, known as the stability-plasticity dilemma, is especially pronounced in complex multi-agent domains such as the train scheduling problem, where environmental and agent behaviors are constantly changing, and the search space is vast. In this work, we propose addressing these challenges in the train scheduling problem using curriculum learning. We design a curriculum with adjacent skills that build on each other to improve generalization performance. Introducing a curriculum with distinct tasks introduces non-stationarity, which we address by proposing a new algorithm: Continual Deep Q-Network (DQN) Expansion (CDE). Our approach dynamically generates and adjusts Q-function subspaces to handle environmental changes and task requirements. CDE mitigates catastrophic forgetting through EWC while ensuring high plasticity using adaptive rational activation functions. Experimental results demonstrate significant improvements in learning efficiency and adaptability compared to RL baselines and other adapted methods for continual learning, highlighting the potential of our method in managing the stability-plasticity dilemma in the adaptive train scheduling setting.
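The abstract names two standard building blocks on top of a DQN backbone: EWC (elastic weight consolidation) for stability and learnable rational (Padé) activation functions for plasticity. The sketch below is a minimal, hypothetical PyTorch illustration of how these two pieces can be combined in a single TD loss. It is not the authors' CDE implementation: the dynamic generation and adjustment of Q-function subspaces, the curriculum, and all hyperparameters and initialisations are specific to the paper, and every class and function name here is a placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RationalActivation(nn.Module):
    """Learnable rational (Pade) activation P(x) / (1 + |Q(x)|) with trainable
    coefficients. Adaptive activations like this are one way to keep a network
    plastic when the task distribution shifts."""

    def __init__(self, num_p: int = 4, num_q: int = 3):
        super().__init__()
        # Initialise close to the identity mapping; the initialisation used in
        # the paper (e.g. approximating a ReLU-like function) may differ.
        p_init = torch.zeros(num_p)
        p_init[1] = 1.0
        self.p = nn.Parameter(p_init)
        self.q = nn.Parameter(torch.zeros(num_q))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        powers_p = torch.stack([x ** i for i in range(self.p.numel())], dim=-1)
        powers_q = torch.stack([x ** (i + 1) for i in range(self.q.numel())], dim=-1)
        numerator = (powers_p * self.p).sum(-1)
        # 1 + |Q(x)| keeps the denominator bounded away from zero.
        denominator = 1.0 + (powers_q * self.q).sum(-1).abs()
        return numerator / denominator


class QNetwork(nn.Module):
    """Small Q-network whose hidden layers use rational activations."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, hidden)
        self.act1 = RationalActivation()
        self.fc2 = nn.Linear(hidden, hidden)
        self.act2 = RationalActivation()
        self.out = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.act1(self.fc1(obs))
        h = self.act2(self.fc2(h))
        return self.out(h)


def ewc_penalty(model: nn.Module, fisher: dict, old_params: dict):
    """Quadratic EWC penalty sum_i F_i * (theta_i - theta*_i)^2, where `fisher`
    holds diagonal Fisher estimates from the previous task and `old_params`
    the parameter values at the end of that task."""
    loss = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return loss


def dqn_loss_with_ewc(q_net, target_net, batch, gamma, fisher, old_params, ewc_lambda):
    """One-step DQN TD loss plus the EWC stability term."""
    obs, actions, rewards, next_obs, dones = batch  # actions: long, dones: float in {0, 1}
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    td_loss = F.smooth_l1_loss(q_values, targets)
    return td_loss + ewc_lambda * ewc_penalty(q_net, fisher, old_params)
```

Under these assumptions, the EWC term anchors parameters that mattered on earlier curriculum tasks while the trainable activation coefficients remain free to adapt, which is the stability-plasticity split the abstract describes.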