{"title":"基于强化学习的车间动态调度组合规则选择","authors":"Yingzi Wei, Mingyang Zhao","doi":"10.1109/RAMECH.2004.1438070","DOIUrl":null,"url":null,"abstract":"Dispatching rules are usually applied dynamically to schedule the job in the dynamic job-shop. Existing scheduling approaches seldom address the machine selection in the scheduling process. Following the principles of traditional dispatching rules, composite rules, considering both the machine selection and job selection, were proposed in this paper. Reinforcement learning (IRL) is an on-line actor critic method. The dynamic system is trained to enhance its learning and adaptive capability by a RL algorithm. We define the conception of pressure for describing the system feature and determining the state sequence of search space. Designing a reward function should be guided based on the scheduling goal. We present the conception of jobs' estimated mean lateness (EMLT) that is used to determine the amount of reward or penalty. The scheduling system is trained by Q-learning algorithm through the learning stage and then it successively schedules the operations. Competitive results with the RL-agent approach suggest that it can be used as real-time optimal scheduling technology.","PeriodicalId":252964,"journal":{"name":"IEEE Conference on Robotics, Automation and Mechatronics, 2004.","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Composite rules selection using reinforcement learning for dynamic job-shop scheduling\",\"authors\":\"Yingzi Wei, Mingyang Zhao\",\"doi\":\"10.1109/RAMECH.2004.1438070\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Dispatching rules are usually applied dynamically to schedule the job in the dynamic job-shop. Existing scheduling approaches seldom address the machine selection in the scheduling process. 
Following the principles of traditional dispatching rules, composite rules, considering both the machine selection and job selection, were proposed in this paper. Reinforcement learning (IRL) is an on-line actor critic method. The dynamic system is trained to enhance its learning and adaptive capability by a RL algorithm. We define the conception of pressure for describing the system feature and determining the state sequence of search space. Designing a reward function should be guided based on the scheduling goal. We present the conception of jobs' estimated mean lateness (EMLT) that is used to determine the amount of reward or penalty. The scheduling system is trained by Q-learning algorithm through the learning stage and then it successively schedules the operations. Competitive results with the RL-agent approach suggest that it can be used as real-time optimal scheduling technology.\",\"PeriodicalId\":252964,\"journal\":{\"name\":\"IEEE Conference on Robotics, Automation and Mechatronics, 2004.\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Conference on Robotics, Automation and Mechatronics, 2004.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RAMECH.2004.1438070\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Conference on Robotics, Automation and Mechatronics, 
2004.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RAMECH.2004.1438070","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Composite rules selection using reinforcement learning for dynamic job-shop scheduling
Dispatching rules are usually applied dynamically to schedule jobs in a dynamic job shop, yet existing scheduling approaches seldom address machine selection as part of the scheduling process. Following the principles of traditional dispatching rules, this paper proposes composite rules that consider both machine selection and job selection. Reinforcement learning (RL) is an online actor-critic method; the dynamic scheduling system is trained with an RL algorithm to enhance its learning and adaptive capability. We define the concept of pressure to describe the system state and to determine the state sequence of the search space. The design of the reward function is guided by the scheduling goal: we introduce the jobs' estimated mean lateness (EMLT), which determines the amount of reward or penalty. The scheduling system is trained with a Q-learning algorithm during the learning stage and then schedules operations successively. The competitive results achieved by the RL-agent approach suggest that it can serve as a real-time optimal scheduling technique.
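The abstract describes Q-learning over shop states (characterized by "pressure") with composite dispatching rules as actions and an EMLT-derived reward. The following is a minimal sketch of that setup, not the paper's implementation: the pressure discretization, the particular rule combinations, and the stand-in shop simulator are all assumptions introduced for illustration.

```python
import random

# Hypothetical sketch of Q-learning for composite-rule selection.
# States: discretized shop "pressure" levels (assumed discretization).
# Actions: composite rules pairing a job-selection rule with a
# machine-selection rule (assumed rule set, not taken from the paper).
PRESSURE_LEVELS = ["low", "medium", "high"]
COMPOSITE_RULES = ["SPT+WINQ", "EDD+NINQ", "FIFO+WINQ"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed hyperparameters

# Tabular Q-values, one entry per (state, rule) pair.
q = {(s, a): 0.0 for s in PRESSURE_LEVELS for a in COMPOSITE_RULES}

def choose_rule(state):
    """Epsilon-greedy selection of a composite rule for the current state."""
    if random.random() < EPSILON:
        return random.choice(COMPOSITE_RULES)
    return max(COMPOSITE_RULES, key=lambda a: q[(state, a)])

def update(state, rule, reward, next_state):
    """Standard Q-learning update toward reward plus discounted best next value."""
    best_next = max(q[(next_state, a)] for a in COMPOSITE_RULES)
    q[(state, rule)] += ALPHA * (reward + GAMMA * best_next - q[(state, rule)])

def simulate_step(state, rule):
    """Placeholder shop simulator: the paper derives reward from the jobs'
    estimated mean lateness (EMLT); here lower lateness means higher reward."""
    emlt = random.uniform(0.0, 10.0)      # stand-in for a simulated EMLT
    reward = -emlt                        # penalize lateness
    next_state = random.choice(PRESSURE_LEVELS)
    return reward, next_state

# Toy training loop standing in for the paper's learning stage.
state = "low"
for _ in range(1000):
    rule = choose_rule(state)
    reward, next_state = simulate_step(state, rule)
    update(state, rule, reward, next_state)
    state = next_state
```

After training, the greedy policy (argmax of the Q-table per pressure level) plays the role of the learned rule-selection strategy applied during successive scheduling.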