From fault detection to anomaly explanation: A case study on predictive maintenance
João Gama, Rita P. Ribeiro, Saulo Mastelini, Narjes Davari, Bruno Veloso
Journal of Web Semantics, Volume 81, Article 100821 (published 2024-05-15). DOI: 10.1016/j.websem.2024.100821
Article: https://www.sciencedirect.com/science/article/pii/S1570826824000076
Citations: 0
Abstract
Predictive Maintenance applications are increasingly complex, with interactions between many components. Black-box models based on deep-learning techniques are popular approaches due to their predictive accuracy. This paper proposes a neural-symbolic architecture that uses an online rule-learning algorithm to explain when the black-box model predicts failures. The proposed system solves two problems in parallel: (i) anomaly detection and (ii) explanation of the anomaly. For the first problem, we use a state-of-the-art unsupervised autoencoder. For the second problem, we train a rule-learning system that learns a mapping from the input features to the autoencoder’s reconstruction error. Both systems run online and in parallel. The autoencoder signals an alarm for examples whose reconstruction error exceeds a threshold. The causes of the alarm are hard for humans to understand because they result from a non-linear combination of sensor data. The rule triggered by such an example describes the relationship between the input features and the autoencoder’s reconstruction error. The rule explains the failure signal by indicating which sensors contribute to the alarm, allowing the identification of the component involved in the failure. The system can present global explanations for the black-box model and local explanations for why it predicts a failure. We evaluate the proposed system in a real-world case study of Metro do Porto and provide explanations that illustrate its benefits.
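To make the architecture concrete, below is a minimal sketch (in Python, using PyTorch and scikit-learn) of the two parallel components described above: an autoencoder raises an alarm when the reconstruction error of a sensor reading exceeds a threshold, and an interpretable model trained to predict that error from the same features provides the explanation. The network size, the threshold, the sensor names, and the use of a periodically refitted decision tree as a stand-in for the paper's online rule learner are illustrative assumptions, not the authors' exact setup.

# Sketch only: a decision tree stands in for the paper's online rule learner.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeRegressor, export_text

N_SENSORS = 8          # hypothetical number of sensor channels
THRESHOLD = 0.5        # assumed alarm threshold on the reconstruction error

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, latent: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, latent), nn.ReLU())
        self.decoder = nn.Linear(latent, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

autoencoder = AutoEncoder(N_SENSORS)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
surrogate = DecisionTreeRegressor(max_depth=3)
features, errors = [], []   # buffer of (sensor reading, reconstruction error)

def process_reading(sensors: np.ndarray) -> None:
    """One step of the online loop: detect an anomaly and explain it."""
    x = torch.tensor(sensors, dtype=torch.float32).unsqueeze(0)

    # (1) anomaly detection: reconstruction error of the autoencoder
    loss = nn.functional.mse_loss(autoencoder(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    error = loss.item()

    # (2) explanation: an interpretable model regresses the error on the inputs
    features.append(sensors)
    errors.append(error)
    if len(errors) >= 50:   # refit on the buffer; the paper uses an online rule learner instead
        surrogate.fit(np.array(features), np.array(errors))

    if error > THRESHOLD and len(errors) >= 50:
        names = [f"sensor_{i}" for i in range(N_SENSORS)]
        print(f"ALARM: reconstruction error {error:.3f} exceeds {THRESHOLD}")
        print(export_text(surrogate, feature_names=names))  # feature conditions -> predicted error

# feed the loop with synthetic readings (stand-ins for Metro do Porto sensor data)
for _ in range(200):
    process_reading(np.random.rand(N_SENSORS))

In the real system both components run online and in parallel; the rule set covering an alarming example plays the role that export_text plays in this toy version, pointing to the sensors, and hence the component, behind the failure signal.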
About the Journal:
The Journal of Web Semantics is an interdisciplinary journal publishing research and applications from the various subject areas that contribute to the development of a knowledge-intensive and intelligent service Web. These areas include knowledge technologies, ontology, agents, databases, and the semantic grid; disciplines such as information retrieval, language technology, human-computer interaction, and knowledge discovery are of major relevance as well. All aspects of Semantic Web development are covered. The publication of large-scale experiments and their analysis is also encouraged, to clearly illustrate scenarios and methods that introduce semantics into existing Web interfaces, contents, and services. The journal emphasizes the publication of papers that combine theories, methods, and experiments from different subject areas in order to deliver innovative semantic methods and applications.