ChemDT: A stochastic decision transformer for chemical process control
Junseop Shin, Joonsoo Park, Jaehyun Shim, Jong Min Lee
DOI: 10.1016/j.compchemeng.2025.109155
Journal: Computers & Chemical Engineering, Volume 199, Article 109155
Publication date: 2025-05-03
URL: https://www.sciencedirect.com/science/article/pii/S0098135425001590
Citations: 0
Abstract
The rapid advancement of industries has complicated process modeling, as conventional model-based control methods struggle with models that inadequately capture system complexities and impose significant computational burdens on their use. Reinforcement learning (RL), which leverages practical operational data instead of explicit models, often adapts better to these complexities. However, RL’s need for extensive online exploration poses potential risks in sensitive environments like chemical processes. To address this, we propose an offline RL approach based on the Decision Transformer (DT) architecture, named ChemDT. ChemDT incorporates stochastic policies with maximum entropy regularization, broadening policy coverage under limited offline data. To mitigate DT’s vulnerability to stochastic environments, we introduce a monitoring variable, λ, enabling selective responses to significant stochastic events amidst pervasive disturbances. Validated on a Continuous Stirred Tank Reactor (CSTR) and an industrial-scale fed-batch reactor, our approach demonstrates superior control performance compared to other offline RL algorithms.
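The abstract describes a Decision Transformer made stochastic through a distributional action head and trained with maximum-entropy regularization. The sketch below illustrates that general idea in PyTorch; it is not the authors' ChemDT implementation. The plain TransformerEncoder backbone, layer sizes, log-std clamp range, and entropy weight `alpha` are illustrative assumptions, and the monitoring variable λ for stochastic events is not modeled here.

```python
# Minimal sketch of a stochastic Decision Transformer-style policy with
# maximum-entropy regularization. Architecture choices and hyperparameters
# are assumptions for illustration, not the paper's ChemDT implementation.
import torch
import torch.nn as nn


class StochasticDTPolicy(nn.Module):
    def __init__(self, state_dim, act_dim, hidden_dim=128, n_layer=3, n_head=4):
        super().__init__()
        # Embeddings for the (return-to-go, state, action) tokens of each timestep
        self.embed_rtg = nn.Linear(1, hidden_dim)
        self.embed_state = nn.Linear(state_dim, hidden_dim)
        self.embed_action = nn.Linear(act_dim, hidden_dim)
        self.embed_time = nn.Embedding(4096, hidden_dim)
        enc_layer = nn.TransformerEncoderLayer(
            hidden_dim, n_head, 4 * hidden_dim, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(enc_layer, n_layer)
        # Stochastic (Gaussian) action head: mean and log-std per action dimension
        self.mean_head = nn.Linear(hidden_dim, act_dim)
        self.log_std_head = nn.Linear(hidden_dim, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) long
        t_emb = self.embed_time(timesteps)
        # Interleave r, s, a tokens per timestep -> sequence of length 3T
        tokens = torch.stack(
            [
                self.embed_rtg(rtg) + t_emb,
                self.embed_state(states) + t_emb,
                self.embed_action(actions) + t_emb,
            ],
            dim=2,
        ).flatten(1, 2)
        # Causal mask so each token attends only to the past
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(
            tokens.device
        )
        h = self.transformer(tokens, mask=mask)
        h_state = h[:, 1::3]  # hidden states aligned with state tokens
        mean = self.mean_head(h_state)
        log_std = self.log_std_head(h_state).clamp(-5.0, 2.0)
        return torch.distributions.Normal(mean, log_std.exp())


def max_entropy_loss(dist, target_actions, alpha=0.1):
    # Negative log-likelihood of the logged actions minus an entropy bonus:
    # the bonus rewards broader action distributions under limited offline data.
    nll = -dist.log_prob(target_actions).mean()
    entropy = dist.entropy().mean()
    return nll - alpha * entropy
```

The entropy bonus is one plausible reading of "broadening policy coverage under limited offline data": it discourages the policy from collapsing onto the logged actions; in practice the weight `alpha` would be tuned or adapted rather than fixed as assumed here.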
Journal description:
Computers & Chemical Engineering is primarily a journal of record for new developments in the application of computing and systems technology to chemical engineering problems.