Generative Local Interpretable Model-Agnostic Explanations

Mohammad Nagahisarchoghaei, Mirhossein Mousavi Karimi, Shahram Rahimi, Logan Cummins, Ghodsieh Ghanbari
The International FLAIRS Conference Proceedings, published 2023-05-08. DOI: 10.32473/flairs.36.133378. Citations: 1.

Abstract

The use of AI and machine learning models in industry is rapidly increasing. Because of this growth and the noticeable performance of these models, more mission-critical decision-making intelligent systems have been developed. Despite their success, when used for decision-making, AI solutions have a significant drawback: a lack of transparency. This opacity, particularly in complex state-of-the-art machine learning algorithms, leaves users with little understanding of how these models make specific decisions. To address this issue, algorithms such as LIME and SHAP (Kernel SHAP) have been introduced. These algorithms aim to explain AI models by generating data samples around an intended test instance through perturbation of its features. This process has the drawback of potentially generating invalid data points outside of the data domain. In this paper, we aim to improve LIME and SHAP by using a Variational AutoEncoder (VAE), pre-trained on the training dataset, to generate realistic data around the test instance. We also employ a sensitivity feature importance with a Boltzmann distribution to aid in explaining the behavior of the black-box model around the intended test instance.
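The core idea described above, sampling a neighborhood in the VAE's latent space so that perturbed points stay on the data manifold, then fitting a locally weighted surrogate, can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: `encode`/`decode` are linear stubs standing in for a pre-trained VAE, and the function names and the simple `exp(-d/T)` Boltzmann-style weighting are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stubs standing in for a pre-trained VAE's encoder/decoder
# (4-dimensional data, 2-dimensional latent space).
W = rng.normal(size=(4, 2))

def encode(x):
    # Map a data point to its latent representation (stub).
    return x @ W

def decode(z):
    # Map latent codes back to data space (stub via pseudo-inverse).
    return z @ np.linalg.pinv(W)

def vae_neighborhood(x, n=500, sigma=0.5):
    # Perturb in latent space, then decode, so samples stay near
    # the data manifold instead of arbitrary feature-space noise.
    z = encode(x)
    zs = z + sigma * rng.normal(size=(n, z.shape[0]))
    return decode(zs)

def boltzmann_weights(X, x, temperature=1.0):
    # Weight samples by a Boltzmann-style factor of their distance
    # to the test instance: closer samples count more.
    d = np.linalg.norm(X - x, axis=1)
    return np.exp(-d / temperature)

def explain(black_box, x, n=500, sigma=0.5, temperature=1.0):
    # Fit a weighted linear surrogate to the black box around x;
    # its coefficients serve as local feature importances.
    X = vae_neighborhood(x, n, sigma)
    y = black_box(X)
    sw = np.sqrt(boltzmann_weights(X, x, temperature))
    A = np.hstack([X, np.ones((n, 1))])          # add intercept column
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                             # drop the intercept

# Toy black box: a linear model whose weights we try to recover locally.
black_box = lambda X: X @ np.array([1.0, -2.0, 0.5, 0.0])
x0 = np.array([0.2, -0.1, 0.4, 0.3])
phi = explain(black_box, x0)
```

Because the decoded samples lie on a low-dimensional manifold, the recovered coefficients reflect the black box's behavior only along directions the generator can produce, which is exactly the point: the surrogate is fit where the data actually lives.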