Explaining Black-box Predictions by Generating Local Meaningful Perturbations

Tejaswani Verma, Christoph Lingenfelder, D. Klakow
DOI: 10.1109/TransAI51903.2021.00030
Published in: 2021 Third International Conference on Transdisciplinary AI (TransAI), September 2021
Citations: 2

Abstract

Generating explanations of predictions made by machine learning models is a difficult task, especially for black-box models. One possible way to explain an individual decision or recommendation for a given instance is to build an interpretable local surrogate for the underlying black-box model in the vicinity of the given instance. This approach has been adopted by many algorithms, for example LIME and LEAFAGE. These algorithms suffer from shortcomings, strict assumptions and prerequisites, which not only limit their applicability but also affect black-box fidelity of their local approximations. We present ways to overcome their shortcomings including the definition of neighborhood, removal of prerequisites and assumption of linearity in local model. The main contribution of this paper is a novel algorithm (LEMP) which provides explanation for the given instance by building a surrogate model using generated perturbations in the neighborhood of the given instance as training data. Experiments show that our approach is more widely applicable and generates interpretable models with better fidelity to the underlying black-box model than previous algorithms.