Joonsoo Park, Wonhyeok Choi, Dong Il Kim, Ha El Park, Jong Min Lee
{"title":"离线强化学习在工业分界墙柱过程控制中的实际实现","authors":"Joonsoo Park , Wonhyeok Choi , Dong Il Kim , Ha El Park , Jong Min Lee","doi":"10.1016/j.compchemeng.2025.109383","DOIUrl":null,"url":null,"abstract":"<div><div>Reinforcement Learning (RL) has emerged as a promising approach for automating industrial process control, particularly in handling stochastic disturbances and complex dynamics. However, conventional RL methods pose significant safety concerns in real-world applications due to their reliance on extensive real-time interactions with the environment. Offline RL, which derives an optimal policy solely from historical operational data, provides a safer alternative but remains underexplored in industrial chemical processes. In this study, we apply Calibrated Q-Learning (Cal-QL), an offline-to-online RL algorithm, to temperature control of an industrial dividing wall column (DWC). We propose a practical procedure for deploying offline RL in chemical plants, integrating a Long Short-Term Memory (LSTM) network with a Deep Q-Network (DQN) to effectively process time series data structure and discrete action distributions commonly encountered in plant operations. Extensive simulation studies and real-world experiments on an industrial DWC demonstrate the suitability of the proposed framework. We also highlight the critical role of reward function design in balancing short- and long-term objectives, significantly influencing control performance. Our best performing configuration achieved stable temperature control with a high automation ratio of 93.11%, underscoring the feasibility and practical effectiveness of offline RL for complex industrial plant operations.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"204 ","pages":"Article 109383"},"PeriodicalIF":3.9000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Real-world implementation of offline reinforcement learning for process control in industrial dividing wall column\",\"authors\":\"Joonsoo Park , Wonhyeok Choi , Dong Il Kim , Ha El Park , Jong Min Lee\",\"doi\":\"10.1016/j.compchemeng.2025.109383\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Reinforcement Learning (RL) has emerged as a promising approach for automating industrial process control, particularly in handling stochastic disturbances and complex dynamics. However, conventional RL methods pose significant safety concerns in real-world applications due to their reliance on extensive real-time interactions with the environment. Offline RL, which derives an optimal policy solely from historical operational data, provides a safer alternative but remains underexplored in industrial chemical processes. In this study, we apply Calibrated Q-Learning (Cal-QL), an offline-to-online RL algorithm, to temperature control of an industrial dividing wall column (DWC). We propose a practical procedure for deploying offline RL in chemical plants, integrating a Long Short-Term Memory (LSTM) network with a Deep Q-Network (DQN) to effectively process time series data structure and discrete action distributions commonly encountered in plant operations. Extensive simulation studies and real-world experiments on an industrial DWC demonstrate the suitability of the proposed framework. 
We also highlight the critical role of reward function design in balancing short- and long-term objectives, significantly influencing control performance. Our best performing configuration achieved stable temperature control with a high automation ratio of 93.11%, underscoring the feasibility and practical effectiveness of offline RL for complex industrial plant operations.</div></div>\",\"PeriodicalId\":286,\"journal\":{\"name\":\"Computers & Chemical Engineering\",\"volume\":\"204 \",\"pages\":\"Article 109383\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Chemical Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0098135425003862\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Chemical Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0098135425003862","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Real-world implementation of offline reinforcement learning for process control in industrial dividing wall column
Reinforcement Learning (RL) has emerged as a promising approach for automating industrial process control, particularly in handling stochastic disturbances and complex dynamics. However, conventional RL methods pose significant safety concerns in real-world applications due to their reliance on extensive real-time interaction with the environment. Offline RL, which derives an optimal policy solely from historical operational data, provides a safer alternative but remains underexplored in industrial chemical processes. In this study, we apply Calibrated Q-Learning (Cal-QL), an offline-to-online RL algorithm, to temperature control of an industrial dividing wall column (DWC). We propose a practical procedure for deploying offline RL in chemical plants, integrating a Long Short-Term Memory (LSTM) network with a Deep Q-Network (DQN) to effectively handle the time-series data structures and discrete action distributions commonly encountered in plant operations. Extensive simulation studies and real-world experiments on an industrial DWC demonstrate the suitability of the proposed framework. We also highlight the critical role of reward function design in balancing short- and long-term objectives, which significantly influences control performance. Our best-performing configuration achieved stable temperature control with a high automation ratio of 93.11%, underscoring the feasibility and practical effectiveness of offline RL for complex industrial plant operations.
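For context, Cal-QL (Nakamoto et al., 2023) modifies the conservative Q-learning (CQL) regularizer so that out-of-distribution Q-values are pushed down only where they exceed the value V^μ of a reference policy (typically estimated from Monte Carlo returns in the offline data), keeping the learned Q-function "calibrated" for later online fine-tuning. The following is a sketch of that objective, written from the Cal-QL literature rather than from this article:

```latex
\min_\theta \; \alpha\Big(
  \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi}\big[\max\big(Q_\theta(s,a),\, V^{\mu}(s)\big)\big]
  - \mathbb{E}_{(s,a) \sim \mathcal{D}}\big[Q_\theta(s,a)\big]
\Big)
+ \tfrac{1}{2}\,\mathbb{E}_{(s,a,s') \sim \mathcal{D}}
  \Big[\big(Q_\theta(s,a) - \mathcal{B}^{\pi}\bar{Q}(s,a)\big)^{2}\Big]
```

The abstract's pairing of an LSTM encoder with a DQN head can be pictured with the hypothetical PyTorch sketch below. The layer sizes, window length, and three-move action set are illustrative assumptions, not values reported in the paper.

```python
# Hypothetical sketch of an LSTM-DQN Q-network for discrete plant actions,
# in the spirit of the architecture described in the abstract.
import torch
import torch.nn as nn

class LSTMQNetwork(nn.Module):
    def __init__(self, n_features: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # LSTM encodes a window of past process measurements (temperatures,
        # flows, etc.) into a fixed-size state summary.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # DQN head maps the summary to one Q-value per discrete action
        # (e.g., hold / small-up / small-down moves on a setpoint).
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs_window: torch.Tensor) -> torch.Tensor:
        # obs_window: (batch, time, n_features)
        out, _ = self.lstm(obs_window)
        return self.head(out[:, -1])  # Q-values from the last time step

# Greedy action selection over a 30-step window of 8 measurements and
# 3 candidate setpoint moves (all illustrative numbers).
q_net = LSTMQNetwork(n_features=8, n_actions=3)
window = torch.randn(1, 30, 8)
action = q_net(window).argmax(dim=-1)
```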
Journal overview:
Computers & Chemical Engineering is primarily a journal of record for new developments in the application of computing and systems technology to chemical engineering problems.