{"title":"Machine Learning post-hoc interpretability: a systematic mapping study","authors":"Carla Piazzon Ramos Vieira, L. A. Digiampietri","doi":"10.1145/3535511.3535512","DOIUrl":null,"url":null,"abstract":"Context: In the pre-algorithm world, humans and organizations made decisions in hiring and criminal sentencing. Nowadays, some of these decisions are entirely made or influenced by Machine Learning algorithms. Problem: Research is starting to reveal some troubling examples in which the reality of algorithmic decision-making runs the risk of replicating and even amplifying human biases. Along with that, most algorithmic decision systems are opaque and not interpretable - which makes it more difficult to detect potential biases and mitigate them. Solution: This paper reports an overview of the current literature on machine learning interpretability. IS Theory: This work was conceived under the aegis of the Sociotechnical theory. Artificial Intelligence systems can only be understood and improved if both ‘social’ and ‘technical’ aspects are brought together and treated as interdependent parts of a complex system. Method: The overview presented in this article has resulted from a systematic mapping study. Summary of Results: We find that, currently, the majority of XAI studies are not for end-users affected by the model but rather for data scientists who use explainability as a debugging tool. There is thus a gap in the quality assessment and deployment of interpretable methods. Contributions and Impact in the IS area: The main contribution of the paper is to serve as the motivating background for a series of challenges faced by XAI, such as combining different interpretable methods, evaluating interpretability, and building human-centered methods. We end by discussing concerns raised regarding explainability and presenting a series of questions that can serve as an agenda for future research in the field.","PeriodicalId":106528,"journal":{"name":"Proceedings of the XVIII Brazilian Symposium on Information Systems","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the XVIII Brazilian Symposium on Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3535511.3535512","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Context: In the pre-algorithm world, humans and organizations made decisions in hiring and criminal sentencing. Nowadays, some of these decisions are made entirely by, or influenced by, Machine Learning algorithms. Problem: Research is starting to reveal troubling examples in which algorithmic decision-making risks replicating, and even amplifying, human biases. Moreover, most algorithmic decision systems are opaque and not interpretable, which makes it harder to detect and mitigate potential biases. Solution: This paper reports an overview of the current literature on machine learning interpretability. IS Theory: This work was conceived under the aegis of Sociotechnical theory: Artificial Intelligence systems can only be understood and improved if both ‘social’ and ‘technical’ aspects are brought together and treated as interdependent parts of a complex system. Method: The overview presented in this article resulted from a systematic mapping study. Summary of Results: We find that, currently, the majority of eXplainable AI (XAI) studies are aimed not at end-users affected by the model but rather at data scientists who use explainability as a debugging tool. There is thus a gap in the quality assessment and deployment of interpretable methods. Contributions and Impact in the IS area: The main contribution of the paper is to serve as the motivating background for a series of challenges faced by XAI, such as combining different interpretable methods, evaluating interpretability, and building human-centered methods. We end by discussing concerns raised regarding explainability and presenting a series of questions that can serve as an agenda for future research in the field.
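To make the notion of "post-hoc interpretability" in the title concrete, the sketch below (not from the paper; dataset, model, and method choices are illustrative assumptions) shows one model-agnostic, post-hoc technique the literature commonly applies after training an otherwise opaque model: permutation importance, which measures how much held-out performance drops when a single feature's values are shuffled. This is the "explainability as a debugging tool" use case the abstract attributes to data scientists.

```python
# Minimal sketch of post-hoc, model-agnostic interpretability via
# permutation importance (scikit-learn). Dataset and model are
# illustrative assumptions, not taken from the surveyed paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model first; the explanation comes afterwards ("post hoc").
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times and record the drop in test accuracy.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five features whose shuffling hurts performance the most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} "
          f"{result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Note that this kind of output serves the model developer (e.g., spotting a feature the model leans on suspiciously hard), which is precisely the gap the study identifies: such explanations are rarely designed for, or evaluated with, the end-users affected by the model's decisions.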