{"title":"Collaborative twin actors framework using deep deterministic policy gradient for flexible batch processes","authors":"Xindong Wang , Zidong Liu , Junghui Chen","doi":"10.1016/j.neunet.2025.107461","DOIUrl":null,"url":null,"abstract":"<div><div>Due to its inherent efficiency in the process industry for achieving desired products, batch processing is widely acknowledged for its repetitive nature. Batch-to-batch learning control has traditionally been esteemed as a robust strategy for batch process control. However, the presence of flexible operating conditions in practical batch systems often leads to a lack of prior learning information, hindering learning control from optimizing performance. This article presents a novel approach to flexible batch process control using deep reinforcement learning (DRL) with twin actors. Specifically, a collaborative twin-actor-based deep deterministic policy gradient (CTA-DDPG) method is proposed to generate control policies and ensure safe operation across varying trial lengths and initial conditions. This approach involves the sequential construction of two sets of actor–critic networks with a shared critic. The first set explores meta-policy during an offline stage, while the second set enhances control performance using a supplementary agent during an online stage. To ensure robust policy transfer and efficient learning, a policy integration mechanism and a spatial–temporal experience replay strategy are incorporated, facilitating transfer stability and learning efficiency. The performance of CTA-DDPG is evaluated using both numerical examples and nonlinear injection molding process for tracking control. The results demonstrate the effectiveness and superiority of the proposed method in achieving desired control outcomes.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107461"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025003405","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Batch processing is widely used in the process industry because of its efficiency in achieving desired products, and it is inherently repetitive in nature. Batch-to-batch learning control has traditionally been regarded as a robust strategy for batch process control. However, the presence of flexible operating conditions in practical batch systems often leads to a lack of prior learning information, hindering learning control from optimizing performance. This article presents a novel approach to flexible batch process control using deep reinforcement learning (DRL) with twin actors. Specifically, a collaborative twin-actor-based deep deterministic policy gradient (CTA-DDPG) method is proposed to generate control policies and ensure safe operation across varying trial lengths and initial conditions. This approach involves the sequential construction of two sets of actor–critic networks with a shared critic. The first set explores a meta-policy during an offline stage, while the second set enhances control performance using a supplementary agent during an online stage. To ensure robust policy transfer and efficient learning, a policy integration mechanism and a spatial–temporal experience replay strategy are incorporated, facilitating transfer stability and learning efficiency. The performance of CTA-DDPG is evaluated on both numerical examples and a nonlinear injection molding process for tracking control. The results demonstrate the effectiveness and superiority of the proposed method in achieving desired control outcomes.
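The abstract describes two actor–critic sets that share a single critic: a meta-actor explored offline and a supplementary actor refined online, whose outputs are combined through a policy integration mechanism. The sketch below illustrates one way such a twin-actor arrangement could be organized in PyTorch. The class names, network sizes, and the additive residual-style integration rule are assumptions made purely for illustration; they do not reproduce the paper's CTA-DDPG implementation or its spatial–temporal experience replay.

```python
# Minimal, illustrative sketch of a twin-actor DDPG-style arrangement with a
# shared critic. All names and the additive policy-integration rule are
# assumptions for illustration, not taken from the paper.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy network mapping states to bounded actions."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class SharedCritic(nn.Module):
    """Q-network shared by both actors; scores (state, integrated action) pairs."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


class TwinActorController:
    """Combines an offline meta-actor with an online supplementary actor.

    The additive integration (meta action + scaled correction) is a common
    residual-policy heuristic used here only as a stand-in for the paper's
    policy integration mechanism.
    """
    def __init__(self, state_dim: int, action_dim: int, residual_scale: float = 0.1):
        self.meta_actor = Actor(state_dim, action_dim)            # trained offline
        self.supplementary_actor = Actor(state_dim, action_dim)   # refined online
        self.critic = SharedCritic(state_dim, action_dim)         # shared by both actors
        self.residual_scale = residual_scale

    @torch.no_grad()
    def act(self, state: torch.Tensor) -> torch.Tensor:
        base = self.meta_actor(state)
        correction = self.supplementary_actor(state)
        return torch.clamp(base + self.residual_scale * correction, -1.0, 1.0)


if __name__ == "__main__":
    controller = TwinActorController(state_dim=4, action_dim=1)
    state = torch.randn(1, 4)
    print("integrated action:", controller.act(state))
```

Under this reading, the offline meta-actor supplies a transferable baseline policy while the online supplementary actor learns a small correction for the current operating condition; sharing one critic lets both actors be evaluated against a single value estimate.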
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.