Composite rules selection using reinforcement learning for dynamic job-shop scheduling

Yingzi Wei, Mingyang Zhao
{"title":"Composite rules selection using reinforcement learning for dynamic job-shop scheduling","authors":"Yingzi Wei, Mingyang Zhao","doi":"10.1109/RAMECH.2004.1438070","DOIUrl":null,"url":null,"abstract":"Dispatching rules are usually applied dynamically to schedule the job in the dynamic job-shop. Existing scheduling approaches seldom address the machine selection in the scheduling process. Following the principles of traditional dispatching rules, composite rules, considering both the machine selection and job selection, were proposed in this paper. Reinforcement learning (IRL) is an on-line actor critic method. The dynamic system is trained to enhance its learning and adaptive capability by a RL algorithm. We define the conception of pressure for describing the system feature and determining the state sequence of search space. Designing a reward function should be guided based on the scheduling goal. We present the conception of jobs' estimated mean lateness (EMLT) that is used to determine the amount of reward or penalty. The scheduling system is trained by Q-learning algorithm through the learning stage and then it successively schedules the operations. Competitive results with the RL-agent approach suggest that it can be used as real-time optimal scheduling technology.","PeriodicalId":252964,"journal":{"name":"IEEE Conference on Robotics, Automation and Mechatronics, 2004.","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Conference on Robotics, Automation and Mechatronics, 2004.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RAMECH.2004.1438070","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Dispatching rules are usually applied dynamically to schedule jobs in a dynamic job shop. Existing scheduling approaches, however, seldom address machine selection as part of the scheduling process. Following the principles of traditional dispatching rules, this paper proposes composite rules that consider both machine selection and job selection. Reinforcement learning (RL) is an online, actor-critic method; the dynamic scheduling system is trained with an RL algorithm to enhance its learning and adaptive capability. We define the concept of pressure to describe system features and to determine the state sequence of the search space. The design of the reward function is guided by the scheduling goal. We introduce the concept of the jobs' estimated mean lateness (EMLT), which is used to determine the amount of reward or penalty. The scheduling system is trained with a Q-learning algorithm during the learning stage and then schedules the operations successively. Competitive results obtained with the RL-agent approach suggest that it can be used as a real-time optimal scheduling technology.
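The paper itself gives no code, but the abstract outlines a standard Q-learning loop over composite dispatching rules with an EMLT-driven reward. The following is a minimal sketch of that idea: the specific rule names, hyperparameter values, and the `reward_from_emlt` formulation are illustrative assumptions, not the authors' exact design.

```python
import random
from collections import defaultdict

# Illustrative composite rules: each action pairs a machine-selection rule
# with a job-selection rule, mirroring the paper's idea of composite rules.
# These particular rule names are assumptions, not the paper's exact set.
COMPOSITE_RULES = [
    ("least_loaded_machine", "shortest_processing_time"),
    ("least_loaded_machine", "earliest_due_date"),
    ("earliest_available_machine", "shortest_processing_time"),
    ("earliest_available_machine", "earliest_due_date"),
]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # typical Q-learning hyperparameters

q_table = defaultdict(float)  # (state, action_index) -> Q-value, default 0.0

def choose_rule(state):
    """Epsilon-greedy selection of a composite rule for the current state
    (in the paper, the state would be derived from the 'pressure' concept)."""
    if random.random() < EPSILON:
        return random.randrange(len(COMPOSITE_RULES))
    return max(range(len(COMPOSITE_RULES)), key=lambda a: q_table[(state, a)])

def reward_from_emlt(emlt_before, emlt_after):
    """Reward is positive when the estimated mean lateness decreases and
    negative (a penalty) when it increases -- one plausible reading of the
    EMLT-based reward, not the paper's exact formula."""
    return emlt_before - emlt_after

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q_table[(next_state, a)] for a in range(len(COMPOSITE_RULES)))
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )
```

After the learning stage, scheduling would simply call `choose_rule` greedily (EPSILON set to 0) at each decision point and apply the selected machine- and job-selection pair to the next operation.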