Explainable artificial intelligence for deep learning-based model predictive controllers
Christian Utama, B. Karg, Christian Meske, S. Lucia
2022 26th International Conference on System Theory, Control and Computing (ICSTCC), published 19 October 2022
DOI: 10.1109/ICSTCC55426.2022.9931794 (https://doi.org/10.1109/ICSTCC55426.2022.9931794)
Abstract: Model predictive control (MPC) has become the standard approach in a wide range of control applications. However, applying MPC requires solving a potentially complex optimization problem online every time a new control input is generated. To avoid these expensive online computations, deep learning-based MPC has been developed, in which a neural network imitates the behavior of the MPC. Once such a data-driven approximate controller is derived, there is no straightforward way to trace its proposed actions back to its inputs, which makes the controller a black-box model. In this paper, we propose the use of SHAP, an explainable artificial intelligence technique, to generate insights from learning-based MPC for the purposes of model debugging and simplification. Our results show that SHAP can explain general control behaviors and can also support model simplification in an informed way, representing a better alternative to dimensionality reduction techniques such as principal component analysis.
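To make the workflow the abstract describes concrete, the sketch below is a minimal, hypothetical illustration (not the authors' code): a small neural network is trained to imitate an MPC policy, and SHAP then attributes the network's control output to its state inputs. The stand-in linear "MPC" policy, the gain matrix, the network size, and all dimensions are assumptions introduced here for illustration; the paper does not specify which SHAP explainer was used, so the model-agnostic KernelExplainer is chosen.

import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in for the MPC: a cheap linear feedback law u = -Kx.
# In the paper's setting, this would be an online optimizer solved offline
# to generate training data for the imitating network.
K = np.array([[0.8, 0.3, 0.1, 0.05]])          # assumed gain, 4 states -> 1 input

def mpc_policy(x):
    return x @ -K.T

# Sample states and query the "MPC" to build an imitation-learning dataset.
X = rng.uniform(-1.0, 1.0, size=(5000, 4))      # sampled states
U = mpc_policy(X).ravel()                       # corresponding control inputs

# Deep learning-based MPC: a neural network imitating the controller.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, U)

# Explain the approximate controller with SHAP. The background set anchors
# the expected value against which attributions are computed.
background = shap.sample(X, 100)
explainer = shap.KernelExplainer(lambda x: net.predict(x), background)
shap_values = explainer.shap_values(X[:50])     # one attribution per state variable

# Mean absolute SHAP value per input: states whose attributions are
# consistently near zero are candidates for removal.
print(np.abs(shap_values).mean(axis=0))

The final per-input averages mirror the model-simplification use case the abstract mentions: inputs with negligible SHAP attributions can be pruned in an informed way, whereas a technique like PCA mixes every input into each retained component and so cannot identify individual inputs to drop.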