{"title":"Dynamic Economic Optimization of a Continuously Stirred Tank Reactor Using Reinforcement Learning","authors":"Derek Machalek, Titus Quah, Kody M. Powell","doi":"10.23919/ACC45564.2020.9147706","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) algorithms are a set of goal-oriented machine learning algorithms that can perform control and optimization in a system. Most RL algorithms do not require any information about the underlying dynamics of the system, they only require input and output information. RL algorithms can therefore be applied to a wide range of systems. This paper explores the use of a custom environment to optimize a problem pertinent to process engineers. In this study the custom environment is a continuously stirred tank reactor (CSTR). The purpose of using a custom environment is to illustrate that any number of systems can readily become RL environments. Three RL algorithms are investigated: deep deterministic policy gradient (DDPG), twin-delayed DDPG (TD3), and proximal policy optimization. They are evaluated based on how they converge to a stable solution and how well they dynamically optimize the economics of the CSTR. All three algorithms perform 98% as well as a first principles model, coupled with a non-linear solver, but only TD3 demonstrates convergence to a stable solution. While itself limited in scope, this paper seeks to further open the door to a coupling between powerful RL algorithms and process systems engineering.","PeriodicalId":288450,"journal":{"name":"2020 American Control Conference (ACC)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 American Control Conference (ACC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC45564.2020.9147706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Reinforcement learning (RL) algorithms are a set of goal-oriented machine learning algorithms that can perform control and optimization in a system. Most RL algorithms do not require any information about the underlying dynamics of the system; they only require input and output information. RL algorithms can therefore be applied to a wide range of systems. This paper explores the use of a custom environment to optimize a problem pertinent to process engineers. In this study the custom environment is a continuously stirred tank reactor (CSTR). The purpose of using a custom environment is to illustrate that any number of systems can readily become RL environments. Three RL algorithms are investigated: deep deterministic policy gradient (DDPG), twin-delayed DDPG (TD3), and proximal policy optimization (PPO). They are evaluated on how reliably they converge to a stable solution and how well they dynamically optimize the economics of the CSTR. All three algorithms perform 98% as well as a first-principles model coupled with a non-linear solver, but only TD3 demonstrates convergence to a stable solution. While itself limited in scope, this paper seeks to further open the door to a coupling between powerful RL algorithms and process systems engineering.
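The abstract's central idea is that a process model such as a CSTR can be wrapped as a custom RL environment exposing only observations, actions, and a reward. The sketch below illustrates that pattern, assuming an OpenAI Gym-style interface; the reactor balances, parameter values, and economic reward here are generic placeholders for illustration, not the authors' implementation.

```python
import numpy as np
import gym
from gym import spaces


class CSTREnv(gym.Env):
    """Illustrative CSTR environment with a Gym-style interface.

    Uses a standard single-reaction exothermic CSTR model; all parameter
    values and the reward function are placeholders, not the paper's settings.
    """

    def __init__(self, dt=0.1, horizon=200):
        self.dt = dt              # integration step (illustrative units)
        self.horizon = horizon    # episode length in steps
        # Action: coolant temperature [K]
        self.action_space = spaces.Box(low=280.0, high=370.0, shape=(1,), dtype=np.float32)
        # Observation: reactant concentration [mol/L] and reactor temperature [K]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 250.0], dtype=np.float32),
            high=np.array([2.0, 450.0], dtype=np.float32),
        )
        self.state = None
        self.t = 0

    def reset(self):
        self.state = np.array([0.9, 310.0], dtype=np.float32)  # [Ca, T]
        self.t = 0
        return self.state.copy()

    def step(self, action):
        Tc = float(np.clip(np.asarray(action).flatten()[0],
                           self.action_space.low[0], self.action_space.high[0]))
        Ca, T = float(self.state[0]), float(self.state[1])
        # Nominal parameters (illustrative values)
        q, V, Caf, Tf = 100.0, 100.0, 1.0, 350.0
        k0, EoverR = 7.2e10, 8750.0
        dH, rho, Cp, UA = -5.0e4, 1000.0, 0.239, 5.0e4
        # One explicit-Euler step of the mass and energy balances
        r = k0 * np.exp(-EoverR / T) * Ca
        dCa = q / V * (Caf - Ca) - r
        dT = q / V * (Tf - T) - dH / (rho * Cp) * r + UA / (V * rho * Cp) * (Tc - T)
        Ca = float(np.clip(Ca + self.dt * dCa, 0.0, 2.0))
        T = float(np.clip(T + self.dt * dT, 250.0, 450.0))
        self.state = np.array([Ca, T], dtype=np.float32)
        self.t += 1
        # Placeholder economic reward: value of converted reactant minus a cooling penalty
        reward = (Caf - Ca) * q - 0.01 * abs(Tc - Tf)
        done = self.t >= self.horizon
        return self.state.copy(), reward, done, {}
```

Once wrapped this way, the environment can be handed to an off-the-shelf agent (e.g., a TD3 or PPO implementation from an RL library) exactly as the abstract describes, with no model equations exposed to the learner beyond the observation, action, and reward signals.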