Interpretable Machine Learning: A brief survey from the predictive maintenance perspective

Simon Vollert, Martin Atzmueller, Andreas Theissler
DOI: 10.1109/ETFA45728.2021.9613467
Published in: 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)
Publication date: 2021-09-07
Citations: 35

Abstract

In the field of predictive maintenance (PdM), machine learning (ML) has gained importance over the last years. Accompanying this development, an increasing number of papers use non-interpretable ML to address PdM problems. While ML has achieved unprecedented performance in recent years, the lack of model explainability or interpretability may manifest itself in a lack of trust. The interpretability of ML models is researched under the terms explainable AI (XAI) and interpretable ML. In this paper, we review publications addressing PdM problems which are motivated by model interpretability. This comprises intrinsically interpretable models and post-hoc explanations. We identify challenges of interpretable ML for PdM, including (1) evaluation of interpretability, (2) the observation that explanation methods explaining black box models may show black box behavior themselves, (3) non-consistent use of terminology, (4) a lack of research for time series data, (5) coverage of explanations, and finally (6) the inclusion of domain knowledge.
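To illustrate the distinction the abstract draws between intrinsically interpretable models and post-hoc explanations, the following sketch applies a post-hoc explanation method (permutation feature importance) to a black-box classifier on synthetic sensor-style data. This example is not from the paper; the feature names, data, and choice of method are illustrative assumptions.

```python
# Minimal sketch of a post-hoc explanation for a black-box PdM-style model.
# Synthetic data; feature names are hypothetical, not from the surveyed paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical sensor readings: temperature, vibration, pressure, noise
X = rng.normal(size=(n, 4))
# Failure label driven (by construction) mainly by temperature and vibration
y = (X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.normal(size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)  # black box

# Post-hoc explanation: permutation importance on held-out data quantifies
# how much shuffling each feature degrades the model's score.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
names = ["temperature", "vibration", "pressure", "noise"]
for name, imp in zip(names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An intrinsically interpretable alternative for the same task would be, e.g., a shallow decision tree or logistic regression, whose structure can be read directly without a separate explanation step.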