Instance-Wise Causal Feature Selection Explainer for Rotating Machinery Fault Diagnosis

Chang Guo, Zuogang Shang, Jiaxin Ren, Zhibin Zhao, Shibin Wang, Xuefeng Chen
{"title":"Instance-Wise Causal Feature Selection Explainer for Rotating Machinery Fault Diagnosis","authors":"Chang Guo, Zuogang Shang, Jiaxin Ren, Zhibin Zhao, Shibin Wang, Xuefeng Chen","doi":"10.1109/ICSMD57530.2022.10058059","DOIUrl":null,"url":null,"abstract":"Artificial neural networks in prognostics and health management (PHM), especially in intelligent fault diagnosis (IFD) have made great progress but possess black-box nature, leading to lack of interpretability and weak robustness when facing complex environment variations. When environment changes, the model tends to make wrong decisions leading to a cost, especially for major equipment if easily trusted by the users. Researchers have made studies on eXplainable Artificial Intelligence (XAI) based IFD to better understand the models. Most of them express their interpretability in the way of drawing gradient-based saliency maps to show where the model focuses on, which is of little consideration for causal effect and not sparse enough without quantitative metrics. To address these issues, we design an XAI method that utilizes a neural network as an instance-wise feature selector to select frequency bands that have stronger causal strength with the diagnosis result than others and further explain the diagnosis model. We quantify causal strength with the relative entropy distance (RED) and treat the simplified RED as the objective function for the optimization of the selector model. 
Finally, our experiments demonstrate the superiority of our method over another algorithm L2X measured by post-hoc accuracy (PHA), variant average causal effect (ACE), and vision plots.","PeriodicalId":396735,"journal":{"name":"2022 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSMD57530.2022.10058059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Artificial neural networks in prognostics and health management (PHM), and especially in intelligent fault diagnosis (IFD), have made great progress but retain a black-box nature, which leads to a lack of interpretability and weak robustness under complex environmental variations. When the environment changes, such a model tends to make wrong decisions that incur real costs, especially for major equipment whose users trust it too readily. Researchers have studied eXplainable Artificial Intelligence (XAI) for IFD to better understand these models. Most existing work expresses interpretability by drawing gradient-based saliency maps that show where the model focuses, which gives little consideration to causal effects and, lacking quantitative metrics, is not sparse enough. To address these issues, we design an XAI method that uses a neural network as an instance-wise feature selector to pick out the frequency bands with stronger causal strength with respect to the diagnosis result than the others, and thereby explain the diagnosis model. We quantify causal strength with the relative entropy distance (RED) and use a simplified RED as the objective function for optimizing the selector model. Finally, our experiments demonstrate the superiority of our method over the L2X algorithm, as measured by post-hoc accuracy (PHA), a variant of the average causal effect (ACE), and visual plots.
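The abstract's core idea, quantifying a frequency band's causal strength via relative entropy between the diagnosis model's outputs with and without that band, can be sketched minimally. The snippet below is a hypothetical illustration, not the paper's exact RED formulation: `toy_model`, `causal_strength`, and the ablation-by-masking scheme are all assumptions introduced for demonstration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D_KL(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def causal_strength(model, x, band_mask):
    """Proxy for a frequency band's causal strength: the relative entropy
    between the model's prediction on the full spectrum and its prediction
    with the band masked out (hypothetical stand-in for the paper's RED)."""
    full = model(x)
    ablated = model(x * (1.0 - band_mask))
    return kl_divergence(full, ablated)

def toy_model(x):
    """Toy 'diagnosis model': softmax over energies in two spectrum halves."""
    logits = np.array([x[:4].sum(), x[4:].sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.array([0.1, 0.2, 3.0, 0.1, 0.1, 0.1, 0.2, 0.1])  # toy spectrum
strong_band = np.zeros(8); strong_band[2] = 1.0  # mask the dominant bin
weak_band = np.zeros(8); weak_band[7] = 1.0      # mask a negligible bin
print(causal_strength(toy_model, x, strong_band))  # large: strong causal band
print(causal_strength(toy_model, x, weak_band))    # near zero: weak causal band
```

In the paper, an instance-wise selector network would be trained to output such masks directly, with a simplified RED as its optimization objective, rather than scoring bands one at a time as above.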