{"title":"加强实时优化的分层学习和运行性能的模型预测控制","authors":"Rui Ren, Shaoyuan Li","doi":"10.1016/j.jprocont.2025.103559","DOIUrl":null,"url":null,"abstract":"<div><div>In process control, the integration of Real-Time Optimization (RTO) and Model Predictive Control (MPC) enables the system to achieve optimal control over both long-term and short-term horizons, thereby enhancing operational efficiency and economic performance. However, this integration still faces several challenges. In the two-layer structure, the upper layer RTO involves solving nonlinear programming problems with significant computational complexity, making it difficult to obtain feasible solutions in real-time within the limited optimization horizon. Simultaneously, the lower layer MPC must solve rolling optimization problems within a constrained time frame, placing higher demands on real-time performance. Additionally, uncertainties in the system affect both optimization and control performance. To address these issues, this paper proposes a noval hierarchical learning approach for RTO and MPC controller using reinforcement learning. This method learns the optimal strategies for RTO and MPC across different time scales, effectively mitigating the high computational costs associated with online computations. Through reward design and experience replay during the hierarchical learning process, efficient training of the upper and lower layer strategies is achieved. Offline training under various uncertainty scenarios, combined with online learning, effectively reduces performance degradation due to model uncertainties. The proposed approach demonstrates excellent performance in two representative chemical engineering case studies.</div></div>","PeriodicalId":50079,"journal":{"name":"Journal of Process Control","volume":"155 ","pages":"Article 103559"},"PeriodicalIF":3.9000,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing hierarchical learning of real-time optimization and model predictive control for operational performance\",\"authors\":\"Rui Ren, Shaoyuan Li\",\"doi\":\"10.1016/j.jprocont.2025.103559\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In process control, the integration of Real-Time Optimization (RTO) and Model Predictive Control (MPC) enables the system to achieve optimal control over both long-term and short-term horizons, thereby enhancing operational efficiency and economic performance. However, this integration still faces several challenges. In the two-layer structure, the upper layer RTO involves solving nonlinear programming problems with significant computational complexity, making it difficult to obtain feasible solutions in real-time within the limited optimization horizon. Simultaneously, the lower layer MPC must solve rolling optimization problems within a constrained time frame, placing higher demands on real-time performance. Additionally, uncertainties in the system affect both optimization and control performance. To address these issues, this paper proposes a noval hierarchical learning approach for RTO and MPC controller using reinforcement learning. This method learns the optimal strategies for RTO and MPC across different time scales, effectively mitigating the high computational costs associated with online computations. 
Through reward design and experience replay during the hierarchical learning process, efficient training of the upper and lower layer strategies is achieved. Offline training under various uncertainty scenarios, combined with online learning, effectively reduces performance degradation due to model uncertainties. The proposed approach demonstrates excellent performance in two representative chemical engineering case studies.</div></div>\",\"PeriodicalId\":50079,\"journal\":{\"name\":\"Journal of Process Control\",\"volume\":\"155 \",\"pages\":\"Article 103559\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Process Control\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0959152425001878\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Process Control","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0959152425001878","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Enhancing hierarchical learning of real-time optimization and model predictive control for operational performance
In process control, the integration of Real-Time Optimization (RTO) and Model Predictive Control (MPC) enables a system to achieve optimal operation over both long-term and short-term horizons, thereby enhancing operational efficiency and economic performance. However, this integration still faces several challenges. In the two-layer structure, the upper-layer RTO must solve nonlinear programming problems of significant computational complexity, making it difficult to obtain feasible solutions in real time within the limited optimization horizon. Meanwhile, the lower-layer MPC must solve its rolling-horizon optimization problems within a constrained time frame, placing high demands on real-time performance. In addition, uncertainties in the system degrade both optimization and control performance. To address these issues, this paper proposes a novel hierarchical learning approach for the RTO layer and the MPC controller based on reinforcement learning. The method learns optimal strategies for RTO and MPC across their different time scales, effectively mitigating the high cost of online computation. Through reward design and experience replay during the hierarchical learning process, the upper- and lower-layer strategies are trained efficiently. Offline training under a variety of uncertainty scenarios, combined with online learning, effectively reduces the performance degradation caused by model uncertainty. The proposed approach demonstrates excellent performance in two representative chemical engineering case studies.
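The abstract describes the architecture but gives no implementation details, so the following is only a minimal sketch of the two-time-scale idea: a slow learned RTO layer that proposes setpoints, a fast learned lower layer that tracks them, and separate reward signals and replay buffers for each. The toy plant, the linear-Gaussian policies, the crude REINFORCE-style updates, and the assumed economic optimum X_OPT are all illustrative placeholders, not the paper's algorithm.

```python
import random
from collections import deque

import numpy as np

rng = np.random.default_rng(0)

def plant_step(x, u, w_scale=0.01):
    """Toy scalar plant; the additive noise stands in for model uncertainty."""
    return 0.9 * x + 0.5 * u + rng.normal(scale=w_scale)

class LinearGaussianPolicy:
    """Crude REINFORCE-style learner; a stand-in for the paper's RL policies."""
    def __init__(self, lr):
        self.k, self.b, self.lr = 0.0, 0.0, lr

    def act(self, s, explore=0.1):
        return self.k * s + self.b + rng.normal(scale=explore)

    def update(self, batch):
        # Policy-gradient step on replayed (state, action, reward) tuples.
        for s, a, r in batch:
            mu = self.k * s + self.b
            self.k += self.lr * r * (a - mu) * s
            self.b += self.lr * r * (a - mu)

upper = LinearGaussianPolicy(lr=1e-4)  # RTO layer: proposes setpoints (slow scale)
lower = LinearGaussianPolicy(lr=1e-3)  # MPC layer: tracks setpoints (fast scale)
buf_upper, buf_lower = deque(maxlen=5000), deque(maxlen=5000)

N_SLOW, N_FAST, BATCH = 200, 10, 32  # slow updates, fast steps per update, replay batch
X_OPT = 1.0                          # assumed economic optimum of the state
x = 0.0
for _ in range(N_SLOW):
    sp = upper.act(x)                # upper layer commits to a setpoint
    econ_cost = 0.0
    for _ in range(N_FAST):          # lower layer runs on the fast time scale
        e = sp - x                   # tracking error seen by the lower layer
        u = lower.act(e)
        x = plant_step(x, u)
        # Lower-layer reward: penalize tracking error and input effort.
        buf_lower.append((e, u, -(sp - x) ** 2 - 0.01 * u ** 2))
        econ_cost += (X_OPT - x) ** 2
    # Upper-layer reward: average economic cost over the fast horizon.
    buf_upper.append((x, sp, -econ_cost / N_FAST))
    # Experience replay for both layers, mirroring the hierarchical training idea.
    if len(buf_lower) >= BATCH:
        lower.update(random.sample(list(buf_lower), BATCH))
    if len(buf_upper) >= BATCH:
        upper.update(random.sample(list(buf_upper), BATCH))

print(f"final state {x:.3f} (economic optimum assumed at {X_OPT})")
```

In a full implementation the two policies would presumably be neural networks trained with an actor-critic method, and the lower layer would warm-start or replace the online MPC solve; the sketch only shows how the two layers' rewards and replay buffers interlock across time scales, and how offline episodes with randomized disturbances (the w_scale parameter here) could cover the uncertainty scenarios the abstract mentions.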
Journal introduction:
This international journal covers the application of control theory, operations research, computer science, and engineering principles to the solution of process control problems. In addition to traditional chemical processing and manufacturing applications, the scope covers a wide range of applications including energy processes, nanotechnology, systems biology, biomedical engineering, pharmaceutical processing technology, energy storage and conversion, the smart grid, and data analytics, among others.
Papers on theory in these areas will also be accepted, provided the theoretical contribution is aimed at applications and the development of process control techniques.
Topics covered include:
• Control applications
• Process monitoring
• Plant-wide control
• Process control systems
• Control techniques and algorithms
• Process modelling and simulation
• Design methods
Advanced design methods exclude well-established and widely studied traditional design techniques such as PID tuning and its many variants. Applications in fields such as control of automotive engines, machinery, and robotics are not deemed suitable unless a clear motivation for their relevance to process control is provided.