A deep reinforcement learning based multiple meta-heuristic methods approach for resource constrained multi-project scheduling problem

Ziting Han, Yanlan Yang, Hua Ye
{"title":"A deep reinforcement learning based multiple meta-heuristic methods approach for resource constrained multi-project scheduling problem","authors":"Ziting Han, Yanlan Yang, Hua Ye","doi":"10.1109/ICSP54964.2022.9778702","DOIUrl":null,"url":null,"abstract":"In order to solve resource-constrained multi-project scheduling problem (RCMPSP) more efficiently, this paper proposes a deep reinforcement learning algorithm based multiple meta-heuristic methods that combines the advantages of discrete cuckoo search (DCS) and particle swarm optimization (PSO). In the process of population evolution, Deep reinforcement learning was used to select the most suitable meta-heuristic algorithm (DCS and PSO) according to the diversity and quality of the population, and the reward was designed according to the evolution effect, so as to guide the algorithm to update the solutions more effectively and find the optimal solution quickly. In addition, the key steps of the CS algorithm, Levy flight and random walk, are redefined in this paper. The task movement, reverse mutation, task list recombination and adaptive and repairable swap mutation are used to make it suitable for solving discrete RCMPSP problems and improve the convergence speed of DCS algorithm. Experimental results on the latest data set (MPLIB) demonstrate the effectiveness of the proposed algorithm.","PeriodicalId":363766,"journal":{"name":"2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSP54964.2022.9778702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

To solve the resource-constrained multi-project scheduling problem (RCMPSP) more efficiently, this paper proposes a deep reinforcement learning based approach that combines multiple meta-heuristic methods, drawing on the complementary strengths of discrete cuckoo search (DCS) and particle swarm optimization (PSO). During population evolution, deep reinforcement learning selects the most suitable meta-heuristic (DCS or PSO) according to the diversity and quality of the population, and the reward is designed according to the effect of each evolution step, guiding the algorithm to update solutions more effectively and to find the optimal solution quickly. In addition, the key steps of the CS algorithm, Lévy flight and random walk, are redefined. Task movement, reverse mutation, task-list recombination, and an adaptive, repairable swap mutation make the approach suitable for discrete RCMPSP instances and improve the convergence speed of the DCS algorithm. Experimental results on the latest data set (MPLIB) demonstrate the effectiveness of the proposed algorithm.
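To make the selection mechanism concrete, the sketch below shows one way the loop described in the abstract could be organized: an RL agent observes the population's diversity, chooses which meta-heuristic (a DCS-style or PSO-style move) updates the population, and receives a reward equal to the improvement that move produced. Everything here is an illustrative assumption rather than the authors' implementation: a toy continuous objective stands in for the RCMPSP makespan, a tabular Q-learning agent stands in for the paper's deep network, and the RCMPSP-specific operators (task movement, reverse mutation, task-list recombination, adaptive repairable swap mutation) are not reproduced.

```python
# Minimal sketch (assumptions noted above): RL-driven selection between two
# meta-heuristic update rules, rewarded by the improvement each step yields.
import numpy as np

rng = np.random.default_rng(0)


def fitness(x):
    # Toy continuous objective standing in for the RCMPSP makespan (lower is better).
    return np.sum(x ** 2, axis=-1)


def dcs_step(pop, best):
    # Heavy-tailed (Levy-flight-like) jump toward the best solution, cuckoo-search flavour.
    step = rng.standard_cauchy(pop.shape) * 0.05
    return pop + step * (best - pop)


def pso_step(pop, vel, pbest, best):
    # Canonical PSO velocity/position update.
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pop) + 1.5 * r2 * (best - pop)
    return pop + vel, vel


def state_of(pop):
    # Discretise population diversity into a small state index for the tabular agent.
    diversity = np.mean(np.std(pop, axis=0))
    return min(int(diversity * 10), 9)


# Tabular Q-learning stand-in for the deep RL selector:
# 10 diversity states x 2 actions (0 = DCS-style step, 1 = PSO-style step).
Q = np.zeros((10, 2))
alpha, gamma, eps = 0.1, 0.9, 0.2

pop = rng.uniform(-5, 5, size=(30, 8))
vel = np.zeros_like(pop)
pbest = pop.copy()
best = pop[np.argmin(fitness(pop))].copy()

for it in range(200):
    s = state_of(pop)
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))

    prev_best = fitness(best[None])[0]
    if a == 0:
        pop = dcs_step(pop, best)
    else:
        pop, vel = pso_step(pop, vel, pbest, best)

    # Greedy replacement of personal and global bests.
    better = fitness(pop) < fitness(pbest)
    pbest[better] = pop[better]
    cand = pop[np.argmin(fitness(pop))]
    if fitness(cand[None])[0] < prev_best:
        best = cand.copy()

    # Reward = improvement achieved by the chosen meta-heuristic this iteration.
    reward = prev_best - fitness(best[None])[0]
    s_next = state_of(pop)
    Q[s, a] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s, a])

print("best objective:", fitness(best[None])[0])
```

The key design choice this sketch tries to reflect is that the agent is rewarded only for the improvement its chosen operator actually delivers, so over time it learns which meta-heuristic pays off at a given level of population diversity; the paper's deep network and RCMPSP-specific state and operators would replace the simplified pieces here.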