Achievable Minimally-Contrastive Counterfactual Explanations

H. Barzekar, S. McRoy
Machine Learning and Knowledge Extraction, pp. 922-936. Published 2023-08-03.
DOI: 10.3390/make5030048 (https://doi.org/10.3390/make5030048)
Impact Factor: 4.0 · JCR Quartile: Q2 (Computer Science, Artificial Intelligence)
Citations: 0

Abstract

Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “What factors affect sales?” or “Why did sales decline?”, but do not highlight what a person should or could do to get a more desirable outcome. Counterfactual explanation approaches address intervention, and some even consider feasibility, but none consider their suitability for real-time applications, such as question answering. Here, we address this gap by introducing a novel model-agnostic method that provides specific, feasible changes that would impact the outcomes of a complex Black Box AI model for a given instance and assess its real-world utility by measuring its real-time performance and ability to find achievable changes. The method uses the instance of concern to generate high-precision explanations and then applies a secondary method to find achievable minimally-contrastive counterfactual explanations (AMCC) while limiting the search to modifications that satisfy domain-specific constraints. Using a widely recognized dataset, we evaluated the classification task to ascertain the frequency and time required to identify successful counterfactuals. For a 90% accurate classifier, our algorithm identified AMCC explanations in 47% of cases (38 of 81), with an average discovery time of 80 ms. These findings verify the algorithm’s efficiency in swiftly producing AMCC explanations, suitable for real-time systems. The AMCC method enhances the transparency of Black Box AI models, aiding individuals in evaluating remedial strategies or assessing potential outcomes.
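To make the search concrete, below is a minimal, illustrative sketch in Python (not the authors' implementation) of the second stage described above: a search for the smallest set of achievable feature changes that flips a black-box prediction. The feature names, the toy stand-in model, and the `achievable` constraint dictionary are all hypothetical, and the exhaustive enumeration here stands in for the paper's first stage, which uses high-precision explanations of the instance to prune the candidate modifications.

```python
# Illustrative sketch only: a brute-force search for a minimally-contrastive,
# achievable counterfactual over a black-box classifier. The feature names,
# toy model, and constraint sets are hypothetical, not taken from the paper.
from itertools import combinations, product

def black_box(x):
    """Stand-in for a complex black-box model: a toy loan-approval rule."""
    return int(x["income"] >= 50_000 and x["debt"] < 20_000 and x["education"] >= 2)

# Instance of concern: currently classified 0 ("denied").
instance = {"income": 42_000, "debt": 25_000, "education": 2, "age": 31}

# Domain-specific achievability constraints: only these features may change,
# and only to these candidate values (e.g., age is immutable, debt may only fall).
achievable = {
    "income": [45_000, 50_000, 55_000],
    "debt": [20_000, 15_000, 10_000],
    "education": [3, 4],
}

def find_amcc(x, model, achievable, max_changes=2):
    """Return the smallest set of achievable changes that flips the prediction,
    trying change sets in order of increasing size (minimal contrast first)."""
    original = model(x)
    for k in range(1, max_changes + 1):
        for features in combinations(achievable, k):
            for values in product(*(achievable[f] for f in features)):
                candidate = dict(x)
                candidate.update(zip(features, values))
                if model(candidate) != original:
                    return dict(zip(features, values))
    return None  # no achievable counterfactual within the change budget

changes = find_amcc(instance, black_box, achievable)
print(changes)  # e.g. {'income': 50000, 'debt': 15000}
```

Enumerating change sets in order of increasing size guarantees that the first counterfactual found is minimally contrastive, and restricting candidate values to the `achievable` dictionary is one way to encode the domain-specific feasibility constraints the abstract refers to (an immutable age, a debt that can only decrease, and so on).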
Source journal: Machine Learning and Knowledge Extraction
CiteScore: 6.30 · Self-citation rate: 0.00% · Typical review time: 7 weeks