Contextual Explanations for Decision Support in Predictive Maintenance

Impact Factor 2.5 · CAS Zone 4 (Multidisciplinary journals) · JCR Q2 (Chemistry, Multidisciplinary)
Michał Kozielski
{"title":"Contextual Explanations for Decision Support in Predictive Maintenance","authors":"Michał Kozielski","doi":"10.3390/app131810068","DOIUrl":null,"url":null,"abstract":"Explainable artificial intelligence (XAI) methods aim to explain to the user on what basis the model makes decisions. Unfortunately, general-purpose approaches that are independent of the types of data, model used and the level of sophistication of the user are not always able to make model decisions more comprehensible. An example of such a problem, which is considered in this paper, is a predictive maintenance task where a model identifying outliers in time series is applied. Typical explanations of the model’s decisions, which present the importance of the attributes, are not sufficient to support the user for such a task. Within the framework of this work, a visualisation and analysis of the context of local explanations presenting attribute importance are proposed. Two types of context for explanations are considered: local and global. They extend the information provided by typical explanations and offer the user greater insight into the validity of the alarms triggered by the model. Evaluation of the proposed context was performed on two time series representations: basic and extended. For the extended representation, an aggregation of explanations was used to make them more intuitive for the user. The results show the usefulness of the proposed context, particularly for the basic data representation. However, for the extended representation, the aggregation of explanations used is sometimes insufficient to provide a clear explanatory context. Therefore, the explanation using simplification with a surrogate model on basic data representation was proposed as a solution. The obtained results can be valuable for developers of decision support systems for predictive maintenance.","PeriodicalId":48760,"journal":{"name":"Applied Sciences-Basel","volume":null,"pages":null},"PeriodicalIF":2.5000,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Sciences-Basel","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.3390/app131810068","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Explainable artificial intelligence (XAI) methods aim to explain to the user the basis on which a model makes its decisions. Unfortunately, general-purpose approaches that are independent of the type of data, the model used, and the user's level of sophistication are not always able to make model decisions more comprehensible. An example of such a problem, considered in this paper, is a predictive maintenance task in which a model identifying outliers in time series is applied. Typical explanations of the model's decisions, which present the importance of the attributes, are not sufficient to support the user in such a task. Within the framework of this work, a visualisation and analysis of the context of local explanations presenting attribute importance are proposed. Two types of context for explanations are considered: local and global. They extend the information provided by typical explanations and offer the user greater insight into the validity of the alarms triggered by the model. The proposed context was evaluated on two time series representations: basic and extended. For the extended representation, an aggregation of explanations was used to make them more intuitive for the user. The results show the usefulness of the proposed context, particularly for the basic data representation. For the extended representation, however, the aggregation of explanations is sometimes insufficient to provide a clear explanatory context. Therefore, explanation by simplification with a surrogate model on the basic data representation was proposed as a solution. The obtained results can be valuable for developers of decision support systems for predictive maintenance.
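The idea of placing a local attribute-importance explanation in a global context can be made concrete with a small sketch. The snippet below is an illustration only, not the paper's implementation: it assumes time windows of sensor readings flattened into feature vectors, uses scikit-learn's IsolationForest as a stand-in outlier detector, and uses a simple replacement-based importance as the local explanation; the "global context" is obtained by comparing the alarm's importances against the same importances computed on known-normal windows.

```python
# A minimal sketch (an assumption, not the paper's method) of contextualised
# local explanations for a time-series outlier detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Basic representation: one row per time window, one column per attribute.
X_train = rng.normal(size=(500, 8))          # normal operating data
x_alarm = rng.normal(size=8)
x_alarm[0] += 4.0                            # inject a deviation in attribute 0

model = IsolationForest(random_state=0).fit(X_train)

def local_importance(x, background, n_repeats=50):
    """Per-attribute importance for one window: how much the anomaly score
    moves back towards 'normal' when that attribute is replaced by values
    drawn from the background (normal) data."""
    base = model.decision_function(x.reshape(1, -1))[0]
    importance = np.zeros(len(x))
    for j in range(len(x)):
        x_rep = np.tile(x, (n_repeats, 1))
        x_rep[:, j] = background[rng.integers(0, len(background), n_repeats), j]
        importance[j] = model.decision_function(x_rep).mean() - base
    return importance

# Local explanation of the alarm window.
alarm_imp = local_importance(x_alarm, X_train)

# Global context: the same importances over known-normal windows form a
# reference distribution, so the user can judge whether the alarm's
# importances are genuinely unusual instead of reading them in isolation.
ref_imp = np.array([local_importance(x, X_train, n_repeats=20)
                    for x in X_train[:50]])
z = (alarm_imp - ref_imp.mean(axis=0)) / (ref_imp.std(axis=0) + 1e-9)

print("importance (alarm):", np.round(alarm_imp, 3))
print("z vs. normal ctx:  ", np.round(z, 2))
```

A large z-score for an attribute says its contribution to the alarm stands out against normal operation, which is exactly the kind of extra validity signal the abstract argues plain importance values lack.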
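The abstract's final remedy, explanation by simplification with a surrogate model on the basic representation, can likewise be sketched. Again a hypothetical illustration: a shallow decision tree is fitted to mimic the detector's anomaly score, and its rules are printed; the detector, data, and attr_j feature names are all invented here.

```python
# A minimal surrogate-model sketch (assumed setup, not the paper's code).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                  # basic representation
detector = IsolationForest(random_state=0).fit(X_train)

# Fit a shallow, human-readable tree to mimic the detector's anomaly score.
scores = detector.decision_function(X_train)
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X_train, scores)

# Plain-text rules approximating why windows are scored as (ab)normal.
print(export_text(surrogate, feature_names=[f"attr_{j}" for j in range(8)]))
```

Capping the tree depth trades fidelity to the detector for rules short enough for an operator to read, which is the point of explanation by simplification.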
Source journal
Applied Sciences-Basel
Categories: CHEMISTRY, MULTIDISCIPLINARY; MATERIALS SCIENCE, MULTIDISCIPLINARY
CiteScore: 5.30
Self-citation rate: 11.10%
Articles published: 10882
About the journal: Applied Sciences (ISSN 2076-3417) provides an advanced forum on all aspects of applied natural sciences. It publishes reviews, research papers and communications. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There is no restriction on the length of the papers. The full experimental details must be provided so that the results can be reproduced. Electronic files and software regarding the full details of the calculation or experimental procedure, if unable to be published in a normal way, can be deposited as supplementary electronic material.