A strip-packing constructive algorithm with deep reinforcement learning for dynamic resource-constrained seru scheduling problems

IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yiran Xiang, Zhe Zhang, Xue Gong, Xiaoling Song, Yong Yin
{"title":"针对资源受限的动态 seru 调度问题的带状包装构造算法与深度强化学习","authors":"Yiran Xiang, Zhe Zhang, Xue Gong, Xiaoling Song, Yong Yin","doi":"10.1007/s00500-024-09815-8","DOIUrl":null,"url":null,"abstract":"<p>This study focuses on unspecified dynamic <i>seru</i> scheduling problems with resource constraints (UDSS-R) in <i>seru</i> production system (SPS). A mixed integer linear programming model is formulated to minimize the <i>makespan</i>, which is solved sequentially from both allocation and scheduling perspectives by a strip-packing constructive algorithm (SPCA) with deep reinforcement learning (DRL). The training samples are trained by the DRL model, and the reward values obtained are calculated by SPCA to train the network so that the agent can find a better solution. The output of DRL is the scheduling order of jobs in <i>serus</i>, while the solution of UDSS-R is solved by SPCA. Finally, a set of test instances are generated to conduct computational experiments with different instance scales for the DRL-SPCA, and the results confirm the effectiveness of proposed DRL-SPCA in solving UDSS-R with more outstanding performance in terms of solution quality and efficiency, across three data scales (10 <i>serus</i> × 100 jobs, 20 <i>serus</i> × 250 jobs, and 30 <i>serus</i> × 400 jobs), compared with GA and SAA, the <i>Avg. RPD</i> of DRL-SPCA decreased by 9.93% and 7.56%, 13.36% and 10.72%, and 9.09% and 7.08%, respectively. In addition, the <i>Avg. CPU time</i> was reduced by 29.53% and 27.93%, 57.48% and 57.04%, and 61.73% and 61.76%, respectively.</p>","PeriodicalId":22039,"journal":{"name":"Soft Computing","volume":"47 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A strip-packing constructive algorithm with deep reinforcement learning for dynamic resource-constrained seru scheduling problems\",\"authors\":\"Yiran Xiang, Zhe Zhang, Xue Gong, Xiaoling Song, Yong Yin\",\"doi\":\"10.1007/s00500-024-09815-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>This study focuses on unspecified dynamic <i>seru</i> scheduling problems with resource constraints (UDSS-R) in <i>seru</i> production system (SPS). A mixed integer linear programming model is formulated to minimize the <i>makespan</i>, which is solved sequentially from both allocation and scheduling perspectives by a strip-packing constructive algorithm (SPCA) with deep reinforcement learning (DRL). The training samples are trained by the DRL model, and the reward values obtained are calculated by SPCA to train the network so that the agent can find a better solution. The output of DRL is the scheduling order of jobs in <i>serus</i>, while the solution of UDSS-R is solved by SPCA. Finally, a set of test instances are generated to conduct computational experiments with different instance scales for the DRL-SPCA, and the results confirm the effectiveness of proposed DRL-SPCA in solving UDSS-R with more outstanding performance in terms of solution quality and efficiency, across three data scales (10 <i>serus</i> × 100 jobs, 20 <i>serus</i> × 250 jobs, and 30 <i>serus</i> × 400 jobs), compared with GA and SAA, the <i>Avg. RPD</i> of DRL-SPCA decreased by 9.93% and 7.56%, 13.36% and 10.72%, and 9.09% and 7.08%, respectively. In addition, the <i>Avg. 
CPU time</i> was reduced by 29.53% and 27.93%, 57.48% and 57.04%, and 61.73% and 61.76%, respectively.</p>\",\"PeriodicalId\":22039,\"journal\":{\"name\":\"Soft Computing\",\"volume\":\"47 1\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-07-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Soft Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s00500-024-09815-8\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Soft Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00500-024-09815-8","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


This study focuses on unspecified dynamic seru scheduling problems with resource constraints (UDSS-R) in the seru production system (SPS). A mixed integer linear programming model is formulated to minimize the makespan, and it is solved sequentially from both the allocation and the scheduling perspectives by a strip-packing constructive algorithm (SPCA) combined with deep reinforcement learning (DRL). Training samples are fed to the DRL model, and the reward values, computed by the SPCA, are used to train the network so that the agent can find better solutions. The output of the DRL is the scheduling order of jobs in serus, while the UDSS-R solution itself is constructed by the SPCA. Finally, a set of test instances of different scales is generated for computational experiments with DRL-SPCA. The results confirm the effectiveness of the proposed DRL-SPCA in solving UDSS-R, with outstanding performance in terms of both solution quality and efficiency. Across three data scales (10 serus × 100 jobs, 20 serus × 250 jobs, and 30 serus × 400 jobs), compared with GA and SAA, the Avg. RPD of DRL-SPCA decreased by 9.93% and 7.56%, 13.36% and 10.72%, and 9.09% and 7.08%, respectively. In addition, the Avg. CPU time was reduced by 29.53% and 27.93%, 57.48% and 57.04%, and 61.73% and 61.76%, respectively.
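
The abstract describes a two-layer architecture: a DRL agent proposes the order in which jobs are scheduled, the SPCA decodes that order into a seru schedule, and the resulting makespan is fed back as the reward. The sketch below is a minimal, hypothetical illustration of that interaction loop, not the authors' implementation: a linear Plackett-Luce policy trained with REINFORCE stands in for the paper's DRL model, a greedy earliest-completion constructor stands in for the SPCA, and the instance data (job workloads, seru worker counts) are invented for the example.

```python
# Minimal sketch of a DRL + constructive-decoder loop (illustrative only):
# a learned policy proposes the job order, a greedy constructor (stand-in
# for SPCA) turns the order into a seru schedule, and -makespan is the reward.
import numpy as np

rng = np.random.default_rng(0)

# ----- toy UDSS-R-style instance (assumed data, not from the paper) -----
N_JOBS, N_SERUS = 30, 4
work = rng.uniform(1.0, 10.0, size=N_JOBS)          # workload of each job
workers = rng.integers(1, 4, size=N_SERUS)          # resource level per seru
feats = np.stack([work, np.ones(N_JOBS)], axis=1)   # simple job features

def construct_schedule(order):
    """Greedy constructor standing in for SPCA: release jobs in the given
    order, always to the seru that would finish the job earliest."""
    finish = np.zeros(N_SERUS)
    for j in order:
        done = finish + work[j] / workers            # completion time per seru
        k = int(np.argmin(done))
        finish[k] = done[k]
    return finish.max()                              # makespan

def sample_order(w):
    """Sample a job order from a Plackett-Luce policy with linear scores,
    returning the order and the gradient of its log-probability."""
    scores = feats @ w
    remaining = list(range(N_JOBS))
    order, grad = [], np.zeros_like(w)
    while remaining:
        logits = scores[remaining]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        pick = rng.choice(len(remaining), p=probs)
        j = remaining[pick]
        grad += feats[j] - probs @ feats[remaining]  # d log p / dw at this step
        order.append(j)
        remaining.pop(pick)
    return order, grad

# ----- REINFORCE training loop: the constructor's makespan is the reward -----
w = np.zeros(feats.shape[1])
baseline, lr = None, 0.01
for episode in range(300):
    order, grad = sample_order(w)
    reward = -construct_schedule(order)
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    w += lr * (reward - baseline) * grad             # policy-gradient update

print("learned weights:", w, "final makespan:", construct_schedule(sample_order(w)[0]))
```

In this loop the constructor plays the role the SPCA plays in the paper: it is the only component that sees the resource side of the problem, so the policy only has to learn a good job ordering and is rewarded according to the makespan of the decoded schedule.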

Source journal
Soft Computing (Engineering & Technology; Computer Science: Interdisciplinary Applications)
CiteScore: 8.10
Self-citation rate: 9.80%
Articles published per year: 927
Average review time: 7.3 months
Journal description: Soft Computing is dedicated to system solutions based on soft computing techniques. It provides rapid dissemination of important results in soft computing technologies, a fusion of research in evolutionary algorithms and genetic programming, neural science and neural net systems, fuzzy set theory and fuzzy systems, and chaos theory and chaotic systems. Soft Computing encourages the integration of soft computing techniques and tools into both everyday and advanced applications. By linking the ideas and techniques of soft computing with other disciplines, the journal serves as a unifying platform that fosters comparisons, extensions, and new applications. As a result, the journal is an international forum for all scientists and engineers engaged in research and development in this fast-growing field.