Dorothea Schwung, Steve Yuwono, Andreas Schwung, Steven X. Ding
Journal: Computers & Chemical Engineering, Volume 152, Article 107382 (September 2021)
DOI: 10.1016/j.compchemeng.2021.107382
Decentralized learning of energy optimal production policies using PLC-informed reinforcement learning
This paper presents a novel approach to distributed optimization in production systems using reinforcement learning (RL), with particular emphasis on energy-efficient production. Unlike existing RL approaches, which learn optimal policies from scratch, we speed up the learning process by using the available control code of the Programmable Logic Controller (PLC) as a baseline for further optimization. For such PLC-informed RL, we propose Teacher-Student RL, which distills the available control code of the individual modules into a neural network that is subsequently optimized using standard RL. The proposed general framework allows PLC control code to be incorporated at different levels of detail. We implement the approach on a laboratory-scale testbed representing production scenarios ranging from continuous to batch production. A comparison of different control strategies shows that even comparatively simple control logic yields considerable improvements over learning from scratch. The results underline the applicability and potential of the approach for improving production efficiency while considerably reducing the energy consumption of the production schedules.
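The teacher-student step described in the abstract — distilling existing PLC control logic into a neural network before RL fine-tuning — can be illustrated with a minimal behavioral-cloning sketch. The rule-based teacher, the setpoint, and the single-unit student below are all illustrative assumptions, not the paper's actual controllers or architecture; a real student network would then be refined with a standard RL algorithm.

```python
import numpy as np

# Hypothetical rule-based "PLC" teacher: switch a heater on when the
# temperature falls below a setpoint (names and values are illustrative).
def plc_teacher(temperature, setpoint=60.0):
    return 1.0 if temperature < setpoint else 0.0

rng = np.random.default_rng(0)

# Collect (state, action) pairs by querying the PLC control logic.
temps = rng.uniform(20.0, 100.0, size=2000)
actions = np.array([plc_teacher(t) for t in temps])

# Student: a single logistic unit trained to imitate the teacher
# (behavioral cloning). A real student would be a deeper network,
# later optimized further with standard RL.
x = (temps - 60.0) / 40.0  # normalize inputs around the setpoint
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # predicted switch-on probability
    w -= lr * np.mean((p - actions) * x)     # gradient of cross-entropy loss
    b -= lr * np.mean(p - actions)

preds = (1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5).astype(float)
accuracy = np.mean(preds == actions)  # agreement with the PLC teacher
```

After training, the student reproduces the teacher's switching behavior on nearly all samples, giving the subsequent RL stage a sensible starting policy rather than a random one.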
Journal overview:
Computers & Chemical Engineering is primarily a journal of record for new developments in the application of computing and systems technology to chemical engineering problems.