Machine Learning post-hoc interpretability: a systematic mapping study

Carla Piazzon Ramos Vieira, L. A. Digiampietri
{"title":"Machine Learning post-hoc interpretability: a systematic mapping study","authors":"Carla Piazzon Ramos Vieira, L. A. Digiampietri","doi":"10.1145/3535511.3535512","DOIUrl":null,"url":null,"abstract":"Context: In the pre-algorithm world, humans and organizations made decisions in hiring and criminal sentencing. Nowadays, some of these decisions are entirely made or influenced by Machine Learning algorithms. Problem: Research is starting to reveal some troubling examples in which the reality of algorithmic decision-making runs the risk of replicating and even amplifying human biases. Along with that, most algorithmic decision systems are opaque and not interpretable - which makes it more difficult to detect potential biases and mitigate them. Solution: This paper reports an overview of the current literature on machine learning interpretability. IS Theory: This work was conceived under the aegis of the Sociotechnical theory. Artificial Intelligence systems can only be understood and improved if both ‘social’ and ‘technical’ aspects are brought together and treated as interdependent parts of a complex system. Method: The overview presented in this article has resulted from a systematic mapping study. Summary of Results: We find that, currently, the majority of XAI studies are not for end-users affected by the model but rather for data scientists who use explainability as a debugging tool. There is thus a gap in the quality assessment and deployment of interpretable methods. Contributions and Impact in the IS area: The main contribution of the paper is to serve as the motivating background for a series of challenges faced by XAI, such as combining different interpretable methods, evaluating interpretability, and building human-centered methods. We end by discussing concerns raised regarding explainability and presenting a series of questions that can serve as an agenda for future research in the field.","PeriodicalId":106528,"journal":{"name":"Proceedings of the XVIII Brazilian Symposium on Information Systems","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the XVIII Brazilian Symposium on Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3535511.3535512","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Context: In the pre-algorithm world, humans and organizations made decisions in hiring and criminal sentencing. Nowadays, some of these decisions are entirely made or influenced by Machine Learning algorithms. Problem: Research is starting to reveal some troubling examples in which the reality of algorithmic decision-making runs the risk of replicating and even amplifying human biases. Along with that, most algorithmic decision systems are opaque and not interpretable - which makes it more difficult to detect potential biases and mitigate them. Solution: This paper reports an overview of the current literature on machine learning interpretability. IS Theory: This work was conceived under the aegis of the Sociotechnical theory. Artificial Intelligence systems can only be understood and improved if both ‘social’ and ‘technical’ aspects are brought together and treated as interdependent parts of a complex system. Method: The overview presented in this article has resulted from a systematic mapping study. Summary of Results: We find that, currently, the majority of XAI studies are not for end-users affected by the model but rather for data scientists who use explainability as a debugging tool. There is thus a gap in the quality assessment and deployment of interpretable methods. Contributions and Impact in the IS area: The main contribution of the paper is to serve as the motivating background for a series of challenges faced by XAI, such as combining different interpretable methods, evaluating interpretability, and building human-centered methods. We end by discussing concerns raised regarding explainability and presenting a series of questions that can serve as an agenda for future research in the field.
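The abstract refers to post-hoc interpretability and to explainability being used mainly as a debugging tool by data scientists. The paper does not prescribe a particular technique, so the sketch below is only an illustration of what a model-agnostic post-hoc method looks like in practice; the dataset, model, and choice of permutation feature importance (via scikit-learn) are assumptions for the example, not methods taken from the study.

```python
# Minimal sketch of a model-agnostic post-hoc interpretability check using
# permutation feature importance. Illustrative only: the dataset, model,
# and method are assumed here, not drawn from the surveyed paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc step: measure how much held-out accuracy drops when each
# feature is shuffled, without inspecting the model's internals.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features, as a data scientist might do
# when using explainability to debug or audit a trained model.
ranking = sorted(zip(X.columns, result.importances_mean),
                 key=lambda pair: pair[1], reverse=True)
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```

The same train-then-explain pattern applies to other post-hoc methods discussed in the XAI literature (for example, surrogate explainers such as LIME or SHAP): the model is built first, and explanations are computed afterwards from its observed behavior.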