{"title":"用于控制 CSTR 工艺的强化学习技术比较","authors":"Eric Monteiro L. Luz, Wouter Caarls","doi":"10.1007/s43153-023-00422-y","DOIUrl":null,"url":null,"abstract":"<p>One of the main promises of Industry 4.0 is the incorporation of computational intelligence techniques in industrial process control. For the chemical industry, the efficiency of the control strategy can reduce the production of effluents and the consumption of raw materials and energy. A possible, although currently underutilized approach is reinforcement learning (RL), which can be used to optimize many sequential decision making processes through training. This work used Van de Vusse kinetics as an evaluation environment for controllers based on reinforcement learning and comparison with conventional solutions like non-linear model predictive control (NMPC). These kinetics contain characteristics that make it difficult for classic controllers such as PID to handle, such as its non-linearity and inversion point. The investigated algorithms showed excellent results for this notable chemical process control benchmark. This study was divided into two experiments: setpoint change and operation around the inversion point. The former showed the ability of RL controllers to adjust the controlled variable and simultaneously maximize production. The latter revealed the excellent management capability of the reinforcement learning algorithms and NMPC at the inversion point. In this study, the RL algorithms performed similar to NMPC but with lower computational cost after training.</p>","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparison of reinforcement learning techniques for controlling a CSTR process\",\"authors\":\"Eric Monteiro L. Luz, Wouter Caarls\",\"doi\":\"10.1007/s43153-023-00422-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>One of the main promises of Industry 4.0 is the incorporation of computational intelligence techniques in industrial process control. For the chemical industry, the efficiency of the control strategy can reduce the production of effluents and the consumption of raw materials and energy. A possible, although currently underutilized approach is reinforcement learning (RL), which can be used to optimize many sequential decision making processes through training. This work used Van de Vusse kinetics as an evaluation environment for controllers based on reinforcement learning and comparison with conventional solutions like non-linear model predictive control (NMPC). These kinetics contain characteristics that make it difficult for classic controllers such as PID to handle, such as its non-linearity and inversion point. The investigated algorithms showed excellent results for this notable chemical process control benchmark. This study was divided into two experiments: setpoint change and operation around the inversion point. The former showed the ability of RL controllers to adjust the controlled variable and simultaneously maximize production. The latter revealed the excellent management capability of the reinforcement learning algorithms and NMPC at the inversion point. 
In this study, the RL algorithms performed similar to NMPC but with lower computational cost after training.</p>\",\"PeriodicalId\":1,\"journal\":{\"name\":\"Accounts of Chemical Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":16.4000,\"publicationDate\":\"2023-12-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Accounts of Chemical Research\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s43153-023-00422-y\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s43153-023-00422-y","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
One of the main promises of Industry 4.0 is the incorporation of computational intelligence techniques into industrial process control. For the chemical industry, an efficient control strategy can reduce the production of effluents and the consumption of raw materials and energy. A possible, although currently underutilized, approach is reinforcement learning (RL), which can be used to optimize many sequential decision-making processes through training. This work used the Van de Vusse kinetics as an evaluation environment for controllers based on reinforcement learning and for comparison with conventional solutions such as non-linear model predictive control (NMPC). These kinetics have characteristics that are difficult for classic controllers such as PID to handle, notably their non-linearity and inversion point. The investigated algorithms showed excellent results on this well-known chemical process control benchmark. The study was divided into two experiments: setpoint changes and operation around the inversion point. The former showed the ability of the RL controllers to track the controlled variable while simultaneously maximizing production; the latter revealed the excellent performance of both the reinforcement learning algorithms and NMPC at the inversion point. Overall, the RL algorithms performed similarly to NMPC but with lower computational cost after training.
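For readers unfamiliar with the benchmark named in the abstract, the sketch below illustrates the kind of environment it implies: an isothermal Van de Vusse CSTR (reactions A → B → C and 2A → D) wrapped in a gym-style interface, with the dilution rate as the manipulated input and a setpoint-tracking reward on the concentration of B. The rate constants, reward shape, and class names here are illustrative assumptions, not the parameterization or code used by the authors.

```python
# Minimal illustrative sketch (not the authors' implementation): an isothermal
# Van de Vusse CSTR simulator with a gym-like step() interface for
# setpoint-tracking RL. Manipulated input: dilution rate u = F/V (1/min);
# controlled variable: concentration of B (mol/L).
import numpy as np

# Rate constants and feed concentration: values commonly quoted for this
# benchmark, assumed here for illustration only.
K1, K2, K3 = 5.0 / 6.0, 5.0 / 3.0, 1.0 / 6.0   # 1/min, 1/min, L/(mol*min)
CA_FEED = 10.0                                  # mol/L

def vandevusse_rhs(ca, cb, u):
    """Right-hand side of the isothermal Van de Vusse CSTR mass balances."""
    dca = -K1 * ca - K3 * ca ** 2 + u * (CA_FEED - ca)
    dcb = K1 * ca - K2 * cb - u * cb
    return dca, dcb

class VanDeVusseEnv:
    """Gym-style environment: the agent chooses the dilution rate; the reward
    penalizes squared deviation of C_B from a setpoint."""

    def __init__(self, setpoint=1.0, dt=0.05, horizon=200):
        self.setpoint, self.dt, self.horizon = setpoint, dt, horizon

    def reset(self):
        self.state = np.array([2.0, 1.0])  # initial [C_A, C_B], arbitrary choice
        self.t = 0
        return self.state.copy()

    def step(self, u):
        # Explicit Euler integration of the ODEs over one control interval.
        ca, cb = self.state
        dca, dcb = vandevusse_rhs(ca, cb, float(u))
        self.state = np.array([ca + self.dt * dca, cb + self.dt * dcb])
        self.t += 1
        reward = -(self.state[1] - self.setpoint) ** 2
        done = self.t >= self.horizon
        return self.state.copy(), reward, done, {}

if __name__ == "__main__":
    env = VanDeVusseEnv()
    obs = env.reset()
    for _ in range(200):
        u = np.random.uniform(0.0, 10.0)  # random placeholder policy
        obs, r, done, _ = env.step(u)
```

In a study like the one described, the random placeholder policy would be replaced by a trained RL agent and compared against an NMPC baseline on the same simulated plant.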