Efficiently Learning a Distributed Control Policy in Cyber-Physical Production Systems Via Simulation Optimization

M. Zou, Edward Huang, B. Vogel-Heuser, C.-H. Chen

2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), August 2020.
DOI: 10.1109/CASE48305.2020.9249228
Citations: 0
Abstract
The manufacturing industry is becoming more dynamic than ever. Non-deterministic network delays and real-time requirements call for decentralized control. For such dynamic and complex systems, learning methods stand out as a transformational technology for building more flexible control solutions. Learning in simulation makes it possible to model highly dynamic systems and to generate samples without occupying a real facility, but it requires prohibitively expensive computation. In this paper, we argue that simulation optimization is a powerful tool that can be applied to a wide range of simulation-based learning processes to great effect. We propose an efficient policy learning framework, ROSA (Reinforcement-learning enhanced by Optimal Simulation Allocation), which integrates learning, control, and simulation optimization techniques and can drastically improve the efficiency of policy learning in a cyber-physical system. A proof-of-concept is implemented on a conveyor-switch network, demonstrating how ROSA can be applied for efficient policy learning, with an emphasis on industrial distributed control systems.
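The "optimal simulation allocation" idea the abstract refers to is in the spirit of Optimal Computing Budget Allocation (OCBA), which concentrates simulation replications on the candidate designs that matter most for identifying the best one. The abstract does not spell out ROSA's actual algorithm, so the following is only a minimal sketch of OCBA-style budget splitting; the function name and parameters are illustrative, not taken from the paper.

```python
import math

def ocba_allocation(means, stds, total_budget):
    """Split a simulation budget across candidate designs, OCBA-style.

    means, stds: current sample means / standard deviations of each
    candidate's estimated performance (maximization assumed).
    total_budget: total number of replications to distribute.
    Returns per-candidate replication counts (approximately summing
    to total_budget). Illustrative sketch only, not the paper's ROSA.
    """
    k = len(means)
    best = max(range(k), key=lambda i: means[i])
    ratios = [0.0] * k
    for i in range(k):
        if i == best:
            continue
        # Non-best designs: allocation proportional to (std_i / gap_i)^2,
        # so close competitors with noisy estimates get more replications.
        gap = max(means[best] - means[i], 1e-12)
        ratios[i] = (stds[i] / gap) ** 2
    # Best design: N_best = std_best * sqrt(sum_i N_i^2 / std_i^2)
    ratios[best] = stds[best] * math.sqrt(
        sum((ratios[i] / stds[i]) ** 2 for i in range(k) if i != best)
    )
    total = sum(ratios)
    return [max(1, round(total_budget * r / total)) for r in ratios]
```

In a simulation-based policy learning loop, such an allocator would decide how many replications each candidate action or policy receives before the next learning update, rather than evaluating all candidates equally.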