A New Metric Based on Association Rules to Assess Feature-Attribution Explainability Techniques for Time Series Forecasting
Ángela R. Troncoso-García; María Martínez-Ballesteros; Francisco Martínez-Álvarez; Alicia Troncoso
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 5, pp. 4140-4155. Published 2025-02-11. DOI: 10.1109/TPAMI.2025.3540513. https://ieeexplore.ieee.org/document/10879535/
Citations: 0
Abstract
This paper introduces a new model-independent metric, called RExQUAL, for quantifying and comparing the quality of explanations provided by attribution-based explainable artificial intelligence techniques. The underlying idea is based on feature attribution: a subset of the top-ranked attributes highlighted by a model-agnostic explainable method in a forecasting task is selected, and association rules are then generated using these key attributes as input data. Novel metrics, including global support and confidence, are proposed to assess the joint quality of the generated rules. Finally, the quality of the explanations is computed as a comprehensive combination of the association rules' global metrics. The proposed method integrates local explanations, obtained through attribution-based approaches for evaluation and feature selection, with global explanations covering the entire dataset. This paper rigorously evaluates the new metric by comparing three explainability techniques: the widely used SHAP and LIME, and the novel methodology RULEx. The experimental design includes forecasting time series of different natures, both univariate and multivariate, with deep learning models. The results underscore the efficacy and versatility of the proposed methodology as a quantitative framework for evaluating and comparing explainability techniques.
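To make the pipeline described in the abstract concrete, here is a minimal Python sketch of a RExQUAL-style score: rank features by attribution, keep the top k, mine pairwise association rules over the binarized key attributes, and average their support and confidence into a single value. All function names, the binarization step, and the equal 0.5/0.5 weighting are illustrative assumptions, not the paper's actual definitions or API.

```python
# Hypothetical RExQUAL-style scoring sketch; names and weights are
# illustrative, not taken from the paper.
from itertools import combinations
import numpy as np

def top_k_features(attributions, k=3):
    """Rank features by mean absolute attribution and keep the top k."""
    ranking = np.argsort(-np.abs(attributions).mean(axis=0))
    return ranking[:k]

def mine_pairwise_rules(binary, min_support=0.1):
    """Enumerate pairwise rules A -> B over binarized key attributes,
    returning (support, confidence) pairs for rules above min_support."""
    rules = []
    for a, b in combinations(range(binary.shape[1]), 2):
        for ante, cons in ((a, b), (b, a)):
            support = np.mean(binary[:, ante] & binary[:, cons])
            ante_support = binary[:, ante].mean()
            if support >= min_support and ante_support > 0:
                rules.append((support, support / ante_support))
    return rules

def rexqual_score(rules):
    """Combine global (averaged) support and confidence into one score;
    the equal weighting is an assumption for illustration."""
    if not rules:
        return 0.0
    supports, confidences = zip(*rules)
    return 0.5 * np.mean(supports) + 0.5 * np.mean(confidences)

# Toy usage: attributions could come from any explainer (SHAP, LIME, ...).
rng = np.random.default_rng(0)
attributions = rng.normal(size=(200, 8))   # (samples, features)
keys = top_k_features(attributions, k=3)
binary = attributions[:, keys] > 0         # crude binarization of key attributes
print(rexqual_score(mine_pairwise_rules(binary)))
```

A higher score under this sketch means the attributes an explainer flags as important co-occur frequently and predictably across the dataset, which is the intuition behind using rule quality as a proxy for explanation quality.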