Interpretable Machine Learning: A brief survey from the predictive maintenance perspective
Simon Vollert, Martin Atzmueller, Andreas Theissler
2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), published 2021-09-07
DOI: 10.1109/ETFA45728.2021.9613467
In the field of predictive maintenance (PdM), machine learning (ML) has gained importance over recent years. Accompanying this development, an increasing number of papers use non-interpretable ML to address PdM problems. While ML has achieved unprecedented performance in recent years, the lack of model explainability or interpretability may manifest itself in a lack of trust. The interpretability of ML models is researched under the terms explainable AI (XAI) and interpretable ML. In this paper, we review publications addressing PdM problems which are motivated by model interpretability. This comprises intrinsically interpretable models and post-hoc explanations. We identify challenges of interpretable ML for PdM, including (1) evaluation of interpretability, (2) the observation that explanation methods explaining black box models may show black box behavior themselves, (3) inconsistent use of terminology, (4) a lack of research on time series data, (5) coverage of explanations, and finally (6) the inclusion of domain knowledge.
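The abstract's distinction between intrinsically interpretable models and post-hoc explanations can be illustrated with a minimal sketch. The snippet below is not from the paper; it is a hypothetical example, assuming scikit-learn, that contrasts a shallow decision tree (intrinsically interpretable: its learned rules can be printed directly) with permutation feature importance (a post-hoc explanation applied to a black-box random forest) on synthetic sensor data resembling a PdM failure-classification task. Feature names and the failure rule are invented for illustration.

```python
# Hypothetical illustration (not from the surveyed paper): contrasting an
# intrinsically interpretable model with a post-hoc explanation method on
# synthetic predictive-maintenance-style sensor data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
# Assumed features: temperature, vibration, pressure; the synthetic
# "failure" label depends only on the first two.
X = rng.normal(size=(n, 3))
y = ((X[:, 0] + 0.5 * X[:, 1]) > 1.0).astype(int)
feature_names = ["temperature", "vibration", "pressure"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (a) Intrinsically interpretable: a shallow decision tree whose learned
# decision rules can be read directly from the model itself.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))

# (b) Post-hoc explanation: permutation feature importance, computed after
# the fact for a black-box random forest by shuffling each feature and
# measuring the drop in test-set score.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

The tree's printed rules are the model, whereas the permutation importances are an explanation layered on top of an opaque model after training; the paper's challenge (2) points out that such post-hoc methods may themselves exhibit black box behavior.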