Job shop scheduling by Deep Dual-Q Network with Prioritized Experience Replay for resilient production control in flexible manufacturing system

IF 4.3 · Tier 2 (Engineering & Technology) · JCR Q2, COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Chao Liu, Kai Chen, Hao Wang, Baojun Yang, Jiewu Leng
{"title":"Job shop scheduling by Deep Dual-Q Network with Prioritized Experience Replay for resilient production control in flexible manufacturing system","authors":"Chao Liu ,&nbsp;Kai Chen ,&nbsp;Hao Wang ,&nbsp;Baojun Yang ,&nbsp;Jiewu Leng","doi":"10.1016/j.cor.2025.107190","DOIUrl":null,"url":null,"abstract":"<div><div>The increasing demand for mass customization and intensifying market competition have made the production cycles shorter and the product iterations faster. As a result, most products are in small and medium batches, introducing both opportunities and challenges to conventional job scheduling. The resilient production control in flexible manufacturing system focuses on creating adaptive and sustainable systems that present the characteristics of multi-product-type variant-volume discrete-flow mixed-flow production, which is still challenging to balance flexibility and efficiency in production. In this paper, a Deep Dual-Q Network with Prioritized Experience Replay (DDQN-PER) approach is proposed to solve the job shop scheduling problem (JSSP). It combines the advantages of the Dueling and Double DQN architecture, utilizing prioritized replay and neural networks to approximate state–action (Q). To extract and store experience data from the experience memory more efficiently, the states of shop environment are represented as information matrices. The two-phase algorithm, comprising iterative offline training and online application (OTOA), trains scheduling policies, forming a dynamic closed loop between offline scheduling results and online real-time production control. Case study and downtime experiments conducted on key machines validate the superiority of the proposed approach. Experimental results demonstrate that using DDQN-PER with optimized hyper-parameters effectively solves the JSSP.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"183 ","pages":"Article 107190"},"PeriodicalIF":4.3000,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Operations Research","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0305054825002187","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

The increasing demand for mass customization and intensifying market competition have shortened production cycles and accelerated product iterations. As a result, most products are produced in small and medium batches, bringing both opportunities and challenges to conventional job scheduling. Resilient production control in flexible manufacturing systems focuses on creating adaptive and sustainable systems characterized by multi-product-type, variant-volume, discrete-flow, and mixed-flow production, in which balancing flexibility and efficiency remains challenging. In this paper, a Deep Dual-Q Network with Prioritized Experience Replay (DDQN-PER) approach is proposed to solve the job shop scheduling problem (JSSP). It combines the advantages of the Dueling and Double DQN architectures, using prioritized experience replay and neural networks to approximate the state–action value function (Q). To extract and store experience data from the experience memory more efficiently, the states of the shop environment are represented as information matrices. A two-phase algorithm, comprising iterative offline training and online application (OTOA), trains the scheduling policies, forming a dynamic closed loop between offline scheduling results and online real-time production control. A case study and downtime experiments on key machines validate the superiority of the proposed approach. Experimental results demonstrate that DDQN-PER with optimized hyper-parameters solves the JSSP effectively.
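The abstract names three generic ingredients of DDQN-PER: a Dueling network head, Double-DQN target computation, and prioritized experience replay over matrix-encoded shop states. The sketch below illustrates these standard building blocks only; it is not the authors' implementation. The framework (PyTorch), the class and parameter names (DuelingQNet, PERBuffer, state_dim, n_actions, alpha, beta, gamma), and the hidden-layer sizes are assumptions made for illustration.

```python
# Minimal sketch of the building blocks named in the abstract (assumed PyTorch;
# names and hyper-parameters are illustrative, not taken from the paper).
import numpy as np
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    The input state is a flattened shop-floor information matrix."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s,a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

class PERBuffer:
    """Proportional prioritized replay: P(i) ~ p_i^alpha, with importance weights."""
    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error: float = 1.0):
        if len(self.data) >= self.capacity:          # drop the oldest transition
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size: int, beta: float = 0.4):
        probs = np.asarray(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()                     # normalize importance weights
        batch = [self.data[i] for i in idx]
        return idx, batch, torch.tensor(weights, dtype=torch.float32)

def double_dqn_target(online, target, r, s_next, done, gamma: float = 0.99):
    """Double DQN: the online net selects the greedy action, the target net evaluates it."""
    with torch.no_grad():
        next_a = online(s_next).argmax(dim=1, keepdim=True)
        next_q = target(s_next).gather(1, next_a).squeeze(1)
        return r + gamma * (1.0 - done) * next_q
```

In training, the TD error of each sampled transition would be fed back into PERBuffer to update its priority, and the loss would be weighted by the returned importance weights; those details, like the state encoding and the OTOA loop, are specific to the paper and are not reproduced here.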
Source Journal

Computers & Operations Research (Engineering & Technology - Engineering: Industrial)

CiteScore: 8.60
Self-citation rate: 8.70%
Number of articles: 292
Review time: 8.5 months

Aims and scope: Operations research and computers meet in a large number of scientific fields, many of which are of vital current concern to our troubled society. These include, among others, ecology, transportation, safety, reliability, urban planning, economics, inventory control, investment strategy and logistics (including reverse logistics). Computers & Operations Research provides an international forum for the application of computers and operations research techniques to problems in these and related fields.