A novel method for solving dynamic flexible job-shop scheduling problem via DIFFormer and deep reinforcement learning

IF 6.7 · JCR Q1, Computer Science, Interdisciplinary Applications · CAS Tier 1 (Engineering & Technology)
Lanjun Wan , Xueyan Cui , Haoxin Zhao , Long Fu , Changyun Li
DOI: 10.1016/j.cie.2024.110688
Journal: Computers & Industrial Engineering, Volume 198, Article 110688
Published: 2024-11-06
Citations: 0

Abstract

Because manufacturing environments change dynamically, heuristic scheduling rules perform inconsistently in dynamic scheduling. Meta-heuristic methods can deliver high-quality schedules, but their solution efficiency is limited by problem scale. Therefore, a novel method for solving the dynamic flexible job-shop scheduling problem (DFJSP) via a diffusion-based Transformer (DIFFormer) and deep reinforcement learning (D-DRL) is proposed. First, the DFJSP is modeled as a Markov decision process, in which the state space is constructed as a heterogeneous graph and the reward function is designed to minimize the makespan and maximize the machine utilization rate. Second, DIFFormer is used to encode the operation and machine nodes to better capture the complex dependencies between nodes, which effectively improves the representation ability of the model. Third, a selective rescheduling strategy is designed for dynamic events to enhance the solution quality of the DFJSP. Fourth, the twin delayed deep deterministic policy gradient (TD3) algorithm is adopted to train an efficient scheduling model. Finally, the effectiveness of the proposed D-DRL is validated through a series of experiments. The results indicate that D-DRL achieves better solution quality and higher solution efficiency when solving DFJSP instances.
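The abstract states that the reward is designed to minimize the makespan while maximizing machine utilization. The paper's actual formulation is not given here, so the following is only a minimal illustrative sketch of that trade-off: the function names, the schedule encoding as (machine, start, duration) tuples, and the weights `w_util` and `w_makespan` are all assumptions, not the authors' definitions.

```python
# Illustrative sketch (assumed names/weights) of a reward that trades off
# makespan (to be minimized) against machine utilization (to be maximized),
# as described in the abstract. Not the paper's actual reward function.

def makespan(schedule):
    """Latest completion time over all (machine, start, duration) entries."""
    return max(start + dur for _, start, dur in schedule)

def machine_utilization(schedule, num_machines):
    """Fraction of the horizon [0, makespan] during which machines are busy."""
    horizon = makespan(schedule)
    busy = sum(dur for _, _, dur in schedule)
    return busy / (num_machines * horizon)

def reward(schedule, num_machines, w_util=1.0, w_makespan=0.01):
    """Higher utilization raises the reward; a longer makespan lowers it."""
    return (w_util * machine_utilization(schedule, num_machines)
            - w_makespan * makespan(schedule))

# Tiny two-machine example: three operations as (machine, start, duration).
demo = [(0, 0, 3), (0, 3, 2), (1, 0, 4)]
print(makespan(demo))                  # 5
print(machine_utilization(demo, 2))    # 0.9
```

In a scheduling MDP such a reward is typically evaluated per decision step (e.g., as the change in estimated makespan after dispatching one operation); the sketch above scores a complete schedule only to keep the trade-off visible.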
Journal

Computers & Industrial Engineering (JCR category: Engineering, Industrial)
CiteScore: 12.70
Self-citation rate: 12.70%
Articles per year: 794
Review time: 10.6 months
Aims and scope: Computers & Industrial Engineering (CAIE) is dedicated to researchers, educators, and practitioners in industrial engineering and related fields. Having pioneered the integration of computers into research, education, and practice, industrial engineering has evolved to make computers and electronic communication integral to its domain. CAIE publishes original contributions focusing on the development of novel computerized methodologies to address industrial engineering problems, and highlights the applications of these methodologies to issues within the broader industrial engineering and associated communities. The journal actively encourages submissions that push the boundaries of fundamental theories and concepts in industrial engineering techniques.