{"title":"Reinforcement learning based automatic tuning of PID controllers in multivariable grinding mill circuits","authors":"J.A. van Niekerk , J.D. le Roux , I.K. Craig","doi":"10.1016/j.conengprac.2025.106522","DOIUrl":null,"url":null,"abstract":"<div><div>Process controllers are extensively utilised in industry and necessitate precise tuning to ensure optimal performance. While tuning controllers through the basic trial-and-error method is possible, this approach typically leads to suboptimal results unless performed by an expert. This study investigates the use of reinforcement learning (RL) for the automatic tuning of proportional–integral–derivative (PID) controllers that control a grinding mill circuit represented by a multivariable nonlinear plant model which was verified using industrial data. By employing the proximal policy optimisation (PPO) algorithm, the RL agent adjusts the controller parameters to enhance closed-loop performance. The problem is formulated to maximise a reward function specifically designed to achieve the desired controller performance. Agent actions are analytically constrained to minimise the risk of closed-loop instability and unsafe behaviours during training. The simulation results indicate that the automatically tuned controller outperforms the manually tuned controller in setpoint tracking. The proposed approach presents a promising solution for real-time controller tuning in industrial processes, potentially increasing productivity and product quality while reducing the need for manual intervention. This research contributes to the field by establishing a robust framework for applying RL in process control, designing effective reward functions, constraining the agent to a safe operational space, and demonstrating its potential to address the challenges associated with PID controller tuning in grinding mill circuits.</div></div>","PeriodicalId":50615,"journal":{"name":"Control Engineering Practice","volume":"165 ","pages":"Article 106522"},"PeriodicalIF":4.6000,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Control Engineering Practice","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0967066125002849","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Process controllers are extensively utilised in industry and require precise tuning to ensure optimal performance. While controllers can be tuned by basic trial and error, this approach typically yields suboptimal results unless performed by an expert. This study investigates the use of reinforcement learning (RL) for the automatic tuning of proportional–integral–derivative (PID) controllers in a grinding mill circuit, represented by a multivariable nonlinear plant model that was verified using industrial data. Using the proximal policy optimisation (PPO) algorithm, the RL agent adjusts the controller parameters to improve closed-loop performance. The problem is formulated as the maximisation of a reward function specifically designed to achieve the desired controller performance. Agent actions are analytically constrained to minimise the risk of closed-loop instability and unsafe behaviour during training. Simulation results indicate that the automatically tuned controller outperforms the manually tuned controller in setpoint tracking. The proposed approach offers a promising solution for real-time controller tuning in industrial processes, potentially increasing productivity and product quality while reducing the need for manual intervention. This research contributes to the field by establishing a robust framework for applying RL in process control, designing effective reward functions, constraining the agent to a safe operational space, and demonstrating the potential of RL to address the challenges associated with PID controller tuning in grinding mill circuits.
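The abstract names the key ingredients of the approach: a PPO agent whose actions are PID parameters, a reward built from closed-loop tracking performance, and analytic bounds that keep the agent in a safe operating region. The sketch below illustrates this general pattern only; it is not the authors' implementation. The first-order surrogate plant, the gain bounds `K_MIN`/`K_MAX`, the IAE-based reward, and the class name `PIDTuningEnv` are all illustrative assumptions standing in for the paper's multivariable grinding mill circuit model and custom reward.

```python
# A minimal sketch, assuming a gymnasium/stable-baselines3 stack: each episode,
# the agent proposes one set of PID gains, the closed loop is simulated over a
# fixed horizon, and the reward is the negative integral of absolute error (IAE).
import numpy as np
import gymnasium as gym
from gymnasium import spaces

K_MIN = np.array([0.0, 0.0, 0.0])  # assumed lower bounds on (Kp, Ki, Kd)
K_MAX = np.array([5.0, 2.0, 1.0])  # assumed upper bounds defining a safe gain box

class PIDTuningEnv(gym.Env):
    """Illustrative tuning environment: action = normalised PID gains,
    reward = -IAE of the resulting closed-loop setpoint response."""

    def __init__(self, horizon=200, dt=0.1):
        self.horizon, self.dt = horizon, dt
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.setpoint = self.np_random.uniform(0.5, 1.5)  # randomised setpoint per episode
        return np.array([self.setpoint, 0.0], dtype=np.float32), {}

    def step(self, action):
        # Map the normalised action into the bounded gain box, loosely mirroring
        # the paper's analytically constrained action space.
        gains = K_MIN + (np.clip(action, -1.0, 1.0) + 1.0) / 2.0 * (K_MAX - K_MIN)
        kp, ki, kd = gains
        y, integ, prev_err, iae = 0.0, 0.0, self.setpoint, 0.0
        for _ in range(self.horizon):
            err = self.setpoint - y
            integ += err * self.dt
            deriv = (err - prev_err) / self.dt
            u = kp * err + ki * integ + kd * deriv
            prev_err = err
            y += self.dt * (-y + u)   # first-order surrogate plant, NOT the mill model
            iae += abs(err) * self.dt
        obs = np.array([self.setpoint, y], dtype=np.float32)
        return obs, -iae, True, False, {}  # one-shot episode: reward = -IAE

if __name__ == "__main__":
    from stable_baselines3 import PPO
    model = PPO("MlpPolicy", PIDTuningEnv(), verbose=0)
    model.learn(total_timesteps=20_000)
```

Clipping the action into a fixed gain box is the simplest way to emulate the safety constraint the abstract describes: the agent can explore freely in its normalised action space while every gain it actually applies stays inside a region chosen in advance to keep the loop stable.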
Journal description:
Control Engineering Practice strives to meet the needs of industrial practitioners and industrially related academics and researchers. It publishes papers which illustrate the direct application of control theory and its supporting tools in all possible areas of automation. As a result, the journal only contains papers which can be considered to have made significant contributions to the application of advanced control techniques. It is normally expected that practical results should be included, but where only simulation studies are available, it is necessary to demonstrate that the simulation model is representative of a genuine application. Strictly theoretical papers will find a more appropriate home in Control Engineering Practice's sister publication, Automatica. It is also expected that papers are innovative with respect to the state of the art and are sufficiently detailed for a reader to be able to duplicate the main results of the paper (supplementary material, including datasets, tables, code and any relevant interactive material, can be made available and downloaded from the website). The benefits of the presented methods must be made very clear, and the new techniques must be compared and contrasted with results obtained using existing methods. Moreover, a thorough analysis of failures that may happen in the design process and implementation can also be part of the paper.
The scope of Control Engineering Practice matches the activities of IFAC.
Papers that demonstrate the contribution of automation and control to improving the performance, quality, productivity, sustainability, resource and energy efficiency, and manageability of systems and processes for the benefit of mankind, and that are relevant to industrial practitioners, are most welcome.