Lotte van Hezewijk, Nico P. Dellaert, Willem L. van Jaarsveld
Title: Scalable deep reinforcement learning in the non-stationary capacitated lot sizing problem
DOI: 10.1016/j.ijpe.2025.109601
Journal: International Journal of Production Economics, Volume 284, Article 109601
Published: 2025-03-25
URL: https://www.sciencedirect.com/science/article/pii/S0925527325000866
Citations: 0
Abstract
Capacitated lot sizing problems in situations with stationary and non-stationary demand (SCLSP) are very common in practice. Solving problems with a large number of items using Deep Reinforcement Learning (DRL) is challenging due to the large action space. This paper proposes a new Markov Decision Process (MDP) formulation to solve this problem, by decomposing the production quantity decisions in a period into sub-decisions, which reduces the action space dramatically. We demonstrate that applying Deep Controlled Learning (DCL) yields policies that outperform the benchmark heuristic as well as a prior DRL implementation. By using the decomposed MDP formulation and DCL method outlined in this paper, we can solve larger problems compared to the previous DRL implementation. Moreover, we adopt a non-stationary demand model for training the policy, which enables us to readily apply the trained policy in dynamic environments when demand changes.
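The abstract's central idea is that choosing production quantities for all items jointly creates an action space that grows exponentially in the number of items, while deciding item by item keeps each sub-decision small. A minimal sketch of that counting argument and the sequential decision loop (purely illustrative; the item count, quantity levels, and `policy` interface are hypothetical and not taken from the paper):

```python
# Hypothetical problem size for illustration only.
N_ITEMS = 5      # number of items in the lot sizing problem
Q_LEVELS = 10    # discrete production-quantity levels per item

# Joint formulation: one action fixes every item's quantity at once,
# so the agent faces Q_LEVELS ** N_ITEMS possible actions per period.
joint_action_space = Q_LEVELS ** N_ITEMS

# Decomposed formulation: the period's decision is split into N_ITEMS
# sequential sub-decisions, each choosing among only Q_LEVELS actions.
decomposed_action_space = Q_LEVELS

def decide_period(policy, state):
    """Build one period's production plan item by item, so the policy
    only ever selects among Q_LEVELS actions at a time."""
    quantities = []
    for item in range(N_ITEMS):
        # Each sub-decision may condition on the state and on the
        # quantities already fixed for earlier items this period.
        q = policy(state, item, quantities)
        quantities.append(q)
    return quantities

# Trivial placeholder policy: produce quantity level 0 for every item.
plan = decide_period(lambda state, item, partial: 0, state=None)
```

With these illustrative numbers the joint space has 10**5 = 100,000 actions, while the decomposed agent never chooses among more than 10, which is what makes value or policy networks with a fixed, small output head feasible as the item count grows.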
Journal description:
The International Journal of Production Economics focuses on the interface between engineering and management. It covers all aspects of manufacturing and process industries, as well as production in general. The journal is interdisciplinary, considering activities throughout the product life cycle and material flow cycle. It aims to disseminate knowledge for improving industrial practice and strengthening the theoretical base for decision making. The journal serves as a forum for exchanging ideas and presenting new developments in theory and application, combining academic standards with practical value for industrial applications.