Real-world implementation of offline reinforcement learning for process control in industrial dividing wall column

IF 3.9 · CAS Region 2 (Engineering & Technology) · Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Joonsoo Park, Wonhyeok Choi, Dong Il Kim, Ha El Park, Jong Min Lee
{"title":"Real-world implementation of offline reinforcement learning for process control in industrial dividing wall column","authors":"Joonsoo Park ,&nbsp;Wonhyeok Choi ,&nbsp;Dong Il Kim ,&nbsp;Ha El Park ,&nbsp;Jong Min Lee","doi":"10.1016/j.compchemeng.2025.109383","DOIUrl":null,"url":null,"abstract":"<div><div>Reinforcement Learning (RL) has emerged as a promising approach for automating industrial process control, particularly in handling stochastic disturbances and complex dynamics. However, conventional RL methods pose significant safety concerns in real-world applications due to their reliance on extensive real-time interactions with the environment. Offline RL, which derives an optimal policy solely from historical operational data, provides a safer alternative but remains underexplored in industrial chemical processes. In this study, we apply Calibrated Q-Learning (Cal-QL), an offline-to-online RL algorithm, to temperature control of an industrial dividing wall column (DWC). We propose a practical procedure for deploying offline RL in chemical plants, integrating a Long Short-Term Memory (LSTM) network with a Deep Q-Network (DQN) to effectively process time series data structure and discrete action distributions commonly encountered in plant operations. Extensive simulation studies and real-world experiments on an industrial DWC demonstrate the suitability of the proposed framework. We also highlight the critical role of reward function design in balancing short- and long-term objectives, significantly influencing control performance. Our best performing configuration achieved stable temperature control with a high automation ratio of 93.11%, underscoring the feasibility and practical effectiveness of offline RL for complex industrial plant operations.</div></div>","PeriodicalId":286,"journal":{"name":"Computers & Chemical Engineering","volume":"204 ","pages":"Article 109383"},"PeriodicalIF":3.9000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Chemical Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0098135425003862","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Reinforcement Learning (RL) has emerged as a promising approach for automating industrial process control, particularly in handling stochastic disturbances and complex dynamics. However, conventional RL methods pose significant safety concerns in real-world applications due to their reliance on extensive real-time interactions with the environment. Offline RL, which derives an optimal policy solely from historical operational data, provides a safer alternative but remains underexplored in industrial chemical processes. In this study, we apply Calibrated Q-Learning (Cal-QL), an offline-to-online RL algorithm, to temperature control of an industrial dividing wall column (DWC). We propose a practical procedure for deploying offline RL in chemical plants, integrating a Long Short-Term Memory (LSTM) network with a Deep Q-Network (DQN) to effectively handle the time-series data structures and discrete action distributions commonly encountered in plant operations. Extensive simulation studies and real-world experiments on an industrial DWC demonstrate the suitability of the proposed framework. We also highlight the critical role of reward function design in balancing short- and long-term objectives, which significantly influences control performance. Our best-performing configuration achieved stable temperature control with a high automation ratio of 93.11%, underscoring the feasibility and practical effectiveness of offline RL for complex industrial plant operations.
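To make the method concrete, below is a minimal sketch, not the authors' implementation, of the ingredients the abstract names: an LSTM-based Q-network that encodes a window of past measurements and emits one Q-value per discrete control action, a Cal-QL-style calibrated conservative loss, and a shaped reward trading short-term tracking against long-term stability. It assumes PyTorch; the names (LSTMQNetwork, cal_ql_loss, shaped_reward), the hyperparameters, the use of Monte Carlo returns as the calibration reference, and the discrete-action form of the conservative term are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an LSTM-DQN with a Cal-QL-style loss (assumes PyTorch).
# All names, shapes, and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMQNetwork(nn.Module):
    """Encodes a window of past process measurements with an LSTM and
    emits one Q-value per discrete control action (DQN-style head)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, window, obs_dim) time-series window
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1])  # Q-values from the last hidden state

def cal_ql_loss(q_net, target_net, batch, gamma=0.99, alpha=1.0):
    """Cal-QL-style objective: TD loss plus a conservative term that pushes
    down out-of-distribution Q-values, but never below a reference value
    (here, hypothetically, the Monte Carlo return of the logged policy)."""
    obs, act, rew, next_obs, done, mc_return = batch
    q = q_net(obs)                                      # (batch, n_actions)
    q_taken = q.gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1.0 - done) * target_net(next_obs).max(1).values
    td_loss = F.smooth_l1_loss(q_taken, target)
    # Calibration: clip Q-values from below at the reference return before
    # the push-down, so offline pessimism does not derail online fine-tuning.
    q_calibrated = torch.maximum(q, mc_return.unsqueeze(1))
    conservative = torch.logsumexp(q_calibrated, dim=1).mean() - q_taken.mean()
    return td_loss + alpha * conservative

def shaped_reward(temp_error, control_move, w_short=1.0, w_long=0.1):
    """Hypothetical reward balancing objectives: the first term rewards
    short-term setpoint tracking, the second discourages aggressive moves
    that degrade long-term stability; the weights set the trade-off."""
    return -(w_short * abs(temp_error) + w_long * abs(control_move))
```

A discrete action set matches how operators typically move setpoints in fixed increments, which is one reason a DQN head is a natural fit for logged plant data; the calibration clip is what distinguishes Cal-QL from plain conservative Q-learning once the policy is fine-tuned online.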
Source journal

Computers & Chemical Engineering (Engineering & Technology: Chemical Engineering)
CiteScore: 8.70
Self-citation rate: 14.00%
Articles per year: 374
Review time: 70 days
Journal description: Computers & Chemical Engineering is primarily a journal of record for new developments in the application of computing and systems technology to chemical engineering problems.