Authors: Lixiang Zhang, Yan Yan, Chen Yang, Yaoguang Hu
DOI: 10.1016/j.aei.2024.102872
Journal: Advanced Engineering Informatics, Vol. 62, Article 102872 (Journal Article, published 2024-10-01; not open access)
URL: https://www.sciencedirect.com/science/article/pii/S1474034624005202
Impact factor: 8.0; JCR quartile: Q1 (Computer Science, Artificial Intelligence)
Dynamic flexible job-shop scheduling by multi-agent reinforcement learning with reward-shaping
Achieving mass personalization presents significant challenges in performance and adaptability when solving dynamic flexible job-shop scheduling problems (DFJSP). Previous studies have struggled to achieve high performance in variable contexts. To tackle this challenge, this paper introduces a novel scheduling strategy based on heterogeneous multi-agent reinforcement learning. The strategy combines centralized optimization with decentralized decision-making through collaboration among job and machine agents, while exploiting historical experience to support data-driven learning. The DFJSP with transportation time is first formulated as a heterogeneous multi-agent partially observable Markov decision process. This formulation captures the interactions between the decision-making agents and the environment, and incorporates a reward-shaping mechanism that coordinates job and machine agents to minimize the weighted tardiness of dynamic jobs. We then develop a dueling double deep Q-network algorithm that incorporates the reward-shaping mechanism to learn optimal strategies for machine allocation and job sequencing in the DFJSP. This approach mitigates the sparse-reward problem and accelerates learning. Finally, the effectiveness of the proposed method is verified through numerical experiments, which demonstrate its superiority over state-of-the-art baselines in reducing the weighted tardiness of dynamic jobs. The proposed method also exhibits strong adaptability when encountering new scenarios, underscoring the benefits of a heterogeneous multi-agent reinforcement learning-based scheduling approach for dynamic and flexible environments.
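The abstract's reward-shaping mechanism is not spelled out, but a standard way to densify a sparse tardiness signal is potential-based shaping with the negative weighted tardiness as the potential. The sketch below is a minimal, hypothetical illustration of that idea; the job data, the potential function, and the discount factor are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch: potential-based reward shaping for a weighted-tardiness
# objective. The paper's exact potential function is not given; here we assume
# Phi(s) = -(current weighted tardiness), a common choice for due-date objectives.

def weighted_tardiness(jobs):
    """Total weighted tardiness: sum_j w_j * max(0, C_j - d_j)."""
    return sum(j["weight"] * max(0.0, j["completion"] - j["due"]) for j in jobs)

def shaped_reward(base_reward, phi_s, phi_s_next, gamma=0.99):
    """Potential-based shaping: F = gamma*Phi(s') - Phi(s).
    Adding F to the base reward provably leaves the optimal policy
    unchanged while giving the agent a dense learning signal."""
    return base_reward + gamma * phi_s_next - phi_s

# Illustrative states: rescheduling reduces job 1's projected lateness.
jobs_before = [{"weight": 2.0, "completion": 12.0, "due": 10.0},
               {"weight": 1.0, "completion": 8.0, "due": 9.0}]
jobs_after  = [{"weight": 2.0, "completion": 10.5, "due": 10.0},
               {"weight": 1.0, "completion": 8.0, "due": 9.0}]

phi_before = -weighted_tardiness(jobs_before)  # -4.0
phi_after = -weighted_tardiness(jobs_after)    # -1.0
r = shaped_reward(0.0, phi_before, phi_after)  # 0.99*(-1.0) - (-4.0) = 3.01
print(round(r, 2))
```

A decision that lowers projected weighted tardiness thus earns an immediate positive bonus instead of waiting for a terminal reward.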
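The dueling double deep Q-network the abstract mentions combines two standard ideas: a dueling head that decomposes Q into state value plus advantage, and a double-DQN target in which the online network selects the next action and the target network evaluates it. The sketch below shows both mechanics with tabular stand-ins in place of the paper's neural networks; all Q-values and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the two components of a dueling double DQN.
# Lists of Q-values stand in for network outputs; numbers are illustrative.

def dueling_q(value, advantages):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN target: the online net picks argmax a*, the target net
    evaluates it, reducing the overestimation bias of vanilla Q-learning."""
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star]

# Next-state Q-values produced by the two (hypothetical) networks.
q_online_next = dueling_q(1.0, [0.5, -0.5, 0.0])  # [1.5, 0.5, 1.0]
q_target_next = dueling_q(0.8, [0.2, 0.1, 0.6])   # [0.7, 0.6, 1.1]

# Online net prefers action 0; target net evaluates it: 2.0 + 0.9 * 0.7.
y = double_dqn_target(reward=2.0, gamma=0.9, q_online_next=q_online_next,
                      q_target_next=q_target_next, done=False)
print(round(y, 2))
```

In training, `y` would serve as the regression target for the online network's Q(s, a) on each sampled transition.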
About the journal:
Advanced Engineering Informatics is an international Journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The Journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitatively and quantitatively. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus and INSPEC.