Explainable Regression Via Prototypes

Renato Miranda Filho, A. Lacerda, G. Pappa
{"title":"Explainable Regression Via Prototypes","authors":"Renato Miranda Filho, A. Lacerda, G. Pappa","doi":"10.1145/3576903","DOIUrl":null,"url":null,"abstract":"Model interpretability/explainability is increasingly a concern when applying machine learning to real-world problems. In this article, we are interested in explaining regression models by exploiting prototypes, which are exemplar cases in the problem domain. Previous works focused on finding prototypes that are representative of all training data but ignore the model predictions, i.e., they explain the data distribution but not necessarily the predictions. We propose a two-level model-agnostic method that considers prototypes to provide global and local explanations for regression problems and that account for both the input features and the model output. M-PEER (Multiobjective Prototype-basEd Explanation for Regression) is based on a multi-objective evolutionary method that optimizes both the error of the explainable model and two other “semantics”-based measures of interpretability adapted from the context of classification, namely, model fidelity and stability. We compare the proposed method with the state-of-the-art method based on prototypes for explanation—ProtoDash—and with other methods widely used in correlated areas of machine learning, such as instance selection and clustering. We conduct experiments on 25 datasets, and results demonstrate significant gains of M-PEER over other strategies, with an average of 12% improvement in the proposed metrics (i.e., model fidelity and stability) and 17% in root mean squared error (RMSE) when compared to ProtoDash.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Evolutionary Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3576903","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Model interpretability/explainability is increasingly a concern when applying machine learning to real-world problems. In this article, we are interested in explaining regression models by exploiting prototypes, which are exemplar cases in the problem domain. Previous works focused on finding prototypes that are representative of all training data but ignored the model predictions, i.e., they explain the data distribution but not necessarily the predictions. We propose a two-level model-agnostic method that uses prototypes to provide global and local explanations for regression problems and that accounts for both the input features and the model output. M-PEER (Multiobjective Prototype-basEd Explanation for Regression) is based on a multi-objective evolutionary method that optimizes the error of the explainable model together with two other "semantics"-based measures of interpretability adapted from the context of classification, namely, model fidelity and stability. We compare the proposed method with the state-of-the-art prototype-based explanation method ProtoDash and with other methods widely used in related areas of machine learning, such as instance selection and clustering. We conduct experiments on 25 datasets, and the results demonstrate significant gains of M-PEER over the other strategies, with an average improvement of 12% in the proposed metrics (i.e., model fidelity and stability) and of 17% in root mean squared error (RMSE) compared to ProtoDash.
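
To make the core idea concrete, below is a minimal sketch, not the authors' M-PEER implementation, of how a candidate prototype set can explain a regression model: the prototypes induce a nearest-prototype surrogate, and the surrogate's fidelity is scored as the RMSE between its predictions and the black-box model's predictions. The function names (`surrogate_predict`, `fidelity_rmse`), the nearest-prototype surrogate, and the toy data are all illustrative assumptions; the paper's method additionally evolves the prototype set with a multi-objective evolutionary algorithm and includes a stability objective.

```python
# Illustrative sketch of prototype-based explanation for regression.
# NOT the authors' M-PEER implementation: it assumes a nearest-prototype
# surrogate and a fidelity score defined as RMSE between the black-box
# model's predictions and the surrogate's predictions.
import numpy as np

def surrogate_predict(X, prototypes, proto_preds):
    """Predict each instance via the black-box output of its nearest prototype."""
    # Pairwise Euclidean distances between instances and prototypes.
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return proto_preds[nearest]

def fidelity_rmse(model_preds, surrogate_preds):
    """Lower is better: how closely the prototype surrogate tracks the model."""
    return np.sqrt(np.mean((model_preds - surrogate_preds) ** 2))

# Toy usage: a stand-in "black box" and one candidate prototype subset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
black_box = lambda X: X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # stand-in model
proto_idx = rng.choice(len(X), size=10, replace=False)          # candidate prototypes
prototypes, proto_preds = X[proto_idx], black_box(X[proto_idx])

print(fidelity_rmse(black_box(X), surrogate_predict(X, prototypes, proto_preds)))
```

In an evolutionary setting like the one the abstract describes, a score of this kind would serve as one objective over candidate prototype subsets, alongside the surrogate's predictive error and a stability measure.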