{"title":"深度强化学习辅助扩展状态观测器用于半导体制造过程中的运行控制","authors":"Zhu Ma, Tianhong Pan","doi":"10.1177/01423312241229492","DOIUrl":null,"url":null,"abstract":"In the semiconductor manufacturing process, extended state observer (ESO)-based run-to-run (RtR) control is an intriguing solution. Although an ESO-RtR control strategy can effectively compensate for the lumped disturbance, appropriate gains are required. In this article, a cutting-edge deep reinforcement learning (DRL) technique is integrated into ESO-RtR, and a composite control framework of DRL-ESO-RtR is developed. In particular, the well-trained DRL agent serves as an assisted controller, which produces appropriate gains of ESO. The optimized ESO then presents a preferable control recipe for the manufacturing process. Under the RtR framework, the gain adjustment problem of ESO is formulated as a Markov decision process. An efficient state space and reward function are wisely designed using the system’s observable information. Correspondingly, the gain of the ESO is adaptively adjusted to cope with changing environmental disturbances. Finally, a twin-delayed deep deterministic policy gradient algorithm is employed to implement the suggested scheme. The feasibility and superiority of the developed method are validated in a deep reactive ion etching process. Comparative results demonstrate that the presented scheme outperforms the ordinary ESO-RtR controller in terms of disturbance rejection.","PeriodicalId":507087,"journal":{"name":"Transactions of the Institute of Measurement and Control","volume":"42 36","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep reinforcement learning-assisted extended state observer for run-to-run control in the semiconductor manufacturing process\",\"authors\":\"Zhu Ma, Tianhong Pan\",\"doi\":\"10.1177/01423312241229492\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the semiconductor manufacturing process, extended state observer (ESO)-based run-to-run (RtR) control is an intriguing solution. Although an ESO-RtR control strategy can effectively compensate for the lumped disturbance, appropriate gains are required. In this article, a cutting-edge deep reinforcement learning (DRL) technique is integrated into ESO-RtR, and a composite control framework of DRL-ESO-RtR is developed. In particular, the well-trained DRL agent serves as an assisted controller, which produces appropriate gains of ESO. The optimized ESO then presents a preferable control recipe for the manufacturing process. Under the RtR framework, the gain adjustment problem of ESO is formulated as a Markov decision process. An efficient state space and reward function are wisely designed using the system’s observable information. Correspondingly, the gain of the ESO is adaptively adjusted to cope with changing environmental disturbances. Finally, a twin-delayed deep deterministic policy gradient algorithm is employed to implement the suggested scheme. The feasibility and superiority of the developed method are validated in a deep reactive ion etching process. 
Comparative results demonstrate that the presented scheme outperforms the ordinary ESO-RtR controller in terms of disturbance rejection.\",\"PeriodicalId\":507087,\"journal\":{\"name\":\"Transactions of the Institute of Measurement and Control\",\"volume\":\"42 36\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-02-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transactions of the Institute of Measurement and Control\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/01423312241229492\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of the Institute of Measurement and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/01423312241229492","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: In the semiconductor manufacturing process, extended state observer (ESO)-based run-to-run (RtR) control is an attractive solution. Although an ESO-RtR control strategy can effectively compensate for the lumped disturbance, appropriate observer gains are required. In this article, a deep reinforcement learning (DRL) technique is integrated into ESO-RtR, and a composite DRL-ESO-RtR control framework is developed. In particular, a well-trained DRL agent serves as an auxiliary controller that produces appropriate ESO gains; the optimized ESO then yields an improved control recipe for the manufacturing process. Under the RtR framework, the gain-adjustment problem of the ESO is formulated as a Markov decision process, and an efficient state space and reward function are designed from the system's observable information. The ESO gain is thereby adjusted adaptively to cope with changing environmental disturbances. Finally, a twin-delayed deep deterministic policy gradient (TD3) algorithm is employed to implement the suggested scheme. The feasibility and superiority of the developed method are validated on a deep reactive ion etching process, and comparative results demonstrate that the presented scheme outperforms an ordinary ESO-RtR controller in terms of disturbance rejection.
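For context, the ESO-RtR mechanism the abstract refers to can be illustrated with a minimal first-order formulation. This is a hedged reconstruction under assumed notation, not the paper's actual model: take a static run-indexed process y_k = βu_k + d_k with known process gain β, recipe u_k, lumped disturbance d_k, and target T; the observer gain ω_k is the quantity the DRL agent would supply each run:

```latex
\begin{aligned}
\hat{d}_{k+1} &= \hat{d}_k + \omega_k \,\bigl( y_k - \beta u_k - \hat{d}_k \bigr),
  \qquad \omega_k \in (0, 1], \\
u_{k+1} &= \frac{T - \hat{d}_{k+1}}{\beta}.
\end{aligned}
```

With a fixed gain this collapses to an EWMA-type RtR controller; the scheme described in the abstract instead lets a trained policy choose the gain run by run, so the observer can react aggressively when the innovation y_k − βu_k − d̂_k is large and filter noise when it is small. The paper's ESO may well be of higher order; the structure above only conveys the role the gain plays.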
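A short Python sketch of the resulting closed loop, again purely illustrative: the process model, all numerical values, and the agent_gain heuristic (a stand-in for the trained TD3 policy) are assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative DRL-assisted ESO-RtR loop (a sketch, not the authors' code).
# Assumed process model: y_k = beta * u_k + d_k, with a drifting disturbance d_k.

rng = np.random.default_rng(0)

beta, target = 1.5, 10.0           # assumed process gain and target output
d_true, drift = 2.0, 0.05          # assumed initial disturbance and per-run drift
d_hat = 0.0                        # observer's estimate of the extended state
u = target / beta                  # initial recipe, computed as if d were zero

def agent_gain(tracking_error: float) -> float:
    """Stand-in for a trained TD3 policy mapping the observed state to an
    ESO gain in (0, 1]. A real agent would be trained on the MDP described
    in the abstract; this heuristic merely grows with the error magnitude."""
    return float(np.clip(0.2 + 0.6 * np.tanh(abs(tracking_error)), 0.05, 1.0))

for k in range(30):
    d_true += drift + rng.normal(scale=0.02)   # disturbance drifts run to run
    y = beta * u + d_true                      # measured output of this run
    innovation = y - (beta * u + d_hat)        # output the observer cannot explain
    omega = agent_gain(target - y)             # DRL-suggested observer gain
    d_hat += omega * innovation                # ESO update of the disturbance estimate
    u = (target - d_hat) / beta                # RtR recipe for the next run
    print(f"run {k:2d}: y={y:6.3f}  d_hat={d_hat:6.3f}  gain={omega:4.2f}")
```

The split of roles mirrors the abstract: the ESO removes the lumped disturbance from the recipe, while the learned policy only tunes how quickly the observer trusts new measurements.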