Explaining predictive uncertainty by exposing second-order effects

Florian Bley, Sebastian Lapuschkin, Wojciech Samek, Grégoire Montavon

Pattern Recognition, Volume 160, Article 111171. DOI: 10.1016/j.patcog.2024.111171. Published online: 2024-11-14. Available at: https://www.sciencedirect.com/science/article/pii/S0031320324009221
Citation count: 0
Abstract
Explainable AI has brought transparency to complex ML black boxes, enabling us, in particular, to identify which features these models use to make predictions. So far, the question of how to explain predictive uncertainty, i.e., why a model ‘doubts’, has been scarcely studied. Our investigation reveals that predictive uncertainty is dominated by second-order effects, involving single features or product interactions between them. We contribute a new method for explaining predictive uncertainty based on these second-order effects. Computationally, our method reduces to a simple covariance computation over a collection of first-order explanations. Our method is generally applicable, allowing common attribution techniques (LRP, Gradient×Input, etc.) to be turned into powerful second-order uncertainty explainers, which we call CovLRP, CovGI, etc. The accuracy of the explanations our method produces is demonstrated through systematic quantitative evaluations, and the overall usefulness of our method is demonstrated through two practical showcases.
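The following is a minimal sketch of how a "covariance computation over a collection of first-order explanations" could look, based only on the abstract. The function names (gradient_x_input, covariance_explanation) and the assumption that the first-order explanations are collected across an ensemble of models (or stochastic forward passes) are illustrative and not taken from the paper; the exact CovLRP/CovGI formulations are defined there and may differ.

```python
import numpy as np

def gradient_x_input(grad_fn, x):
    """First-order attribution: elementwise gradient times input (Gradient x Input)."""
    return grad_fn(x) * x

def covariance_explanation(first_order_explanations):
    """
    Given a collection of first-order explanations (assumed here to be one per
    ensemble member or stochastic forward pass), return their covariance matrix
    across the collection.

    Under this reading, diagonal entries attribute predictive uncertainty to
    single features, while off-diagonal entries attribute it to pairwise
    (product) feature interactions.
    """
    R = np.asarray(first_order_explanations)        # shape: (n_members, n_features)
    R_centered = R - R.mean(axis=0, keepdims=True)  # center over the collection
    return R_centered.T @ R_centered / R.shape[0]   # (n_features, n_features)

# Hypothetical usage with a list of per-member gradient functions `grads` and input `x`:
# explanations = [gradient_x_input(g, x) for g in grads]
# C = covariance_explanation(explanations)
# per_feature = np.diag(C)                     # single-feature contributions
# interactions = C - np.diag(per_feature)      # pairwise interaction contributions
```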
Journal introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.