Reinforcement learning based automatic tuning of PID controllers in multivariable grinding mill circuits

IF 4.6 · CAS Region 2 (Computer Science) · JCR Q1, Automation & Control Systems
J.A. van Niekerk, J.D. le Roux, I.K. Craig
{"title":"多变量磨机电路中基于强化学习的PID控制器自动整定","authors":"J.A. van Niekerk ,&nbsp;J.D. le Roux ,&nbsp;I.K. Craig","doi":"10.1016/j.conengprac.2025.106522","DOIUrl":null,"url":null,"abstract":"<div><div>Process controllers are extensively utilised in industry and necessitate precise tuning to ensure optimal performance. While tuning controllers through the basic trial-and-error method is possible, this approach typically leads to suboptimal results unless performed by an expert. This study investigates the use of reinforcement learning (RL) for the automatic tuning of proportional–integral–derivative (PID) controllers that control a grinding mill circuit represented by a multivariable nonlinear plant model which was verified using industrial data. By employing the proximal policy optimisation (PPO) algorithm, the RL agent adjusts the controller parameters to enhance closed-loop performance. The problem is formulated to maximise a reward function specifically designed to achieve the desired controller performance. Agent actions are analytically constrained to minimise the risk of closed-loop instability and unsafe behaviours during training. The simulation results indicate that the automatically tuned controller outperforms the manually tuned controller in setpoint tracking. The proposed approach presents a promising solution for real-time controller tuning in industrial processes, potentially increasing productivity and product quality while reducing the need for manual intervention. This research contributes to the field by establishing a robust framework for applying RL in process control, designing effective reward functions, constraining the agent to a safe operational space, and demonstrating its potential to address the challenges associated with PID controller tuning in grinding mill circuits.</div></div>","PeriodicalId":50615,"journal":{"name":"Control Engineering Practice","volume":"165 ","pages":"Article 106522"},"PeriodicalIF":4.6000,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning based automatic tuning of PID controllers in multivariable grinding mill circuits\",\"authors\":\"J.A. van Niekerk ,&nbsp;J.D. le Roux ,&nbsp;I.K. Craig\",\"doi\":\"10.1016/j.conengprac.2025.106522\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Process controllers are extensively utilised in industry and necessitate precise tuning to ensure optimal performance. While tuning controllers through the basic trial-and-error method is possible, this approach typically leads to suboptimal results unless performed by an expert. This study investigates the use of reinforcement learning (RL) for the automatic tuning of proportional–integral–derivative (PID) controllers that control a grinding mill circuit represented by a multivariable nonlinear plant model which was verified using industrial data. By employing the proximal policy optimisation (PPO) algorithm, the RL agent adjusts the controller parameters to enhance closed-loop performance. The problem is formulated to maximise a reward function specifically designed to achieve the desired controller performance. Agent actions are analytically constrained to minimise the risk of closed-loop instability and unsafe behaviours during training. The simulation results indicate that the automatically tuned controller outperforms the manually tuned controller in setpoint tracking. 
The proposed approach presents a promising solution for real-time controller tuning in industrial processes, potentially increasing productivity and product quality while reducing the need for manual intervention. This research contributes to the field by establishing a robust framework for applying RL in process control, designing effective reward functions, constraining the agent to a safe operational space, and demonstrating its potential to address the challenges associated with PID controller tuning in grinding mill circuits.</div></div>\",\"PeriodicalId\":50615,\"journal\":{\"name\":\"Control Engineering Practice\",\"volume\":\"165 \",\"pages\":\"Article 106522\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-08-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Control Engineering Practice\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0967066125002849\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Control Engineering Practice","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0967066125002849","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Process controllers are extensively utilised in industry and necessitate precise tuning to ensure optimal performance. While tuning controllers through the basic trial-and-error method is possible, this approach typically leads to suboptimal results unless performed by an expert. This study investigates the use of reinforcement learning (RL) for the automatic tuning of proportional–integral–derivative (PID) controllers that control a grinding mill circuit represented by a multivariable nonlinear plant model which was verified using industrial data. By employing the proximal policy optimisation (PPO) algorithm, the RL agent adjusts the controller parameters to enhance closed-loop performance. The problem is formulated to maximise a reward function specifically designed to achieve the desired controller performance. Agent actions are analytically constrained to minimise the risk of closed-loop instability and unsafe behaviours during training. The simulation results indicate that the automatically tuned controller outperforms the manually tuned controller in setpoint tracking. The proposed approach presents a promising solution for real-time controller tuning in industrial processes, potentially increasing productivity and product quality while reducing the need for manual intervention. This research contributes to the field by establishing a robust framework for applying RL in process control, designing effective reward functions, constraining the agent to a safe operational space, and demonstrating its potential to address the challenges associated with PID controller tuning in grinding mill circuits.
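To make the setup concrete, the sketch below shows how an RL-based tuning loop of this kind is commonly structured: a PPO agent proposes controller gains, a closed-loop simulation is run with those gains, and the reward penalises the integrated absolute tracking error, r = -∫|e(t)|dt. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses stable-baselines3's PPO on a toy first-order SISO plant with hand-picked gain bounds standing in for the "safe operational space", rather than the verified multivariable mill model, the paper's reward function, or its analytically derived action constraints.

```python
# Illustrative sketch only: a PPO agent learns PI gains for a toy first-order
# plant. The plant, gain bounds, and reward below are assumptions made for
# illustration; the paper uses a verified multivariable nonlinear grinding
# mill model and analytically derived action constraints.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class PIDTuningEnv(gym.Env):
    """One episode = one simulated step response with the proposed gains."""

    def __init__(self, dt=0.1, horizon=200):
        super().__init__()
        self.dt, self.horizon = dt, horizon
        # Action = (Kp, Ki). The box bounds play the role of the "safe
        # operational space": the agent cannot propose gains outside it.
        self.action_space = spaces.Box(low=np.array([0.1, 0.01]),
                                       high=np.array([5.0, 2.0]),
                                       dtype=np.float32)
        # Dummy constant observation: tuning is posed as a one-step
        # (contextual-bandit style) episode.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(1,),
                                            dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(1, dtype=np.float32), {}

    def step(self, action):
        # Defensive clipping keeps the loop inside the assumed safe box.
        kp, ki = np.clip(action, self.action_space.low, self.action_space.high)
        y = integ = iae = 0.0
        tau, setpoint = 1.0, 1.0  # toy first-order plant: dy/dt = (u - y)/tau
        for _ in range(self.horizon):
            e = setpoint - y
            integ += e * self.dt
            u = kp * e + ki * integ           # PI control law (D term omitted)
            y += self.dt * (u - y) / tau      # explicit Euler plant update
            iae += abs(e) * self.dt           # integral of absolute error
        reward = -iae                         # better tracking -> higher reward
        return np.zeros(1, dtype=np.float32), reward, True, False, {}


env = PIDTuningEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=5_000)

obs, _ = env.reset()
gains, _ = model.predict(obs, deterministic=True)
print("suggested (Kp, Ki):", gains)
```

In a deployment along the lines of the paper, the environment's step would run the verified mill-circuit model, the action space would cover the gains of all PID loops in the multivariable circuit, and the bounds would come from a stability analysis rather than being hand-picked.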
Source journal
Control Engineering Practice (Engineering Technology — Engineering: Electrical & Electronic)
CiteScore: 9.20
Self-citation rate: 12.20%
Annual articles: 183
Review time: 44 days
Journal description:
Control Engineering Practice strives to meet the needs of industrial practitioners and industrially related academics and researchers. It publishes papers which illustrate the direct application of control theory and its supporting tools in all possible areas of automation. As a result, the journal only contains papers which can be considered to have made significant contributions to the application of advanced control techniques. It is normally expected that practical results should be included, but where simulation-only studies are available, it is necessary to demonstrate that the simulation model is representative of a genuine application. Strictly theoretical papers will find a more appropriate home in Control Engineering Practice's sister publication, Automatica.

It is also expected that papers are innovative with respect to the state of the art and are sufficiently detailed for a reader to be able to duplicate the main results of the paper (supplementary material, including datasets, tables, code and any relevant interactive material, can be made available and downloaded from the website). The benefits of the presented methods must be made very clear, and the new techniques must be compared and contrasted with results obtained using existing methods. Moreover, a thorough analysis of failures that may happen in the design process and implementation can also be part of the paper.

The scope of Control Engineering Practice matches the activities of IFAC. Papers that demonstrate the contribution of automation and control to improving the performance, quality, productivity, sustainability, resource and energy efficiency, and manageability of systems and processes for the benefit of mankind, and that are relevant to industrial practitioners, are most welcome.