{"title":"Navigating the informativeness-compression trade-off in XAI","authors":"Ninell Oldenburg, Anders Søgaard","doi":"10.1007/s43681-025-00733-5","DOIUrl":null,"url":null,"abstract":"<div><p>Every explanation faces a trade-off between informativeness and compression (Kinney and Lombrozo, 2022). On the one hand, we want to provide as much detailed and correct information as possible (informativeness); on the other hand, we want to ensure that a human can process and comprehend the explanation (compression). Current methods in eXplainable AI (XAI) try to satisfy this trade-off <i>statically</i>, outputting <i>one</i> fixed, non-adjustable explanation that sits somewhere on the spectrum between informativeness and compression. However, some current XAI methods fail to meet the expectations of users and developers; several such failures have been reported in the literature, often tied to user-specific knowledge gaps and merely good-enough understanding. In this work, we propose <i>Dynamic XAI</i> to navigate the trade-off interactively. We argue that this simple idea can help overcome the trade-off by eliminating gaps in user-specific understanding and preventing misunderstandings. We conclude by situating our approach within the broader ethical considerations around XAI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4925 - 4942"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00733-5.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00733-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Every explanation faces a trade-off between informativeness and compression (Kinney and Lombrozo, 2022). On the one hand, we want to provide as much detailed and correct information as possible (informativeness); on the other hand, we want to ensure that a human can process and comprehend the explanation (compression). Current methods in eXplainable AI (XAI) try to satisfy this trade-off statically, outputting one fixed, non-adjustable explanation that sits somewhere on the spectrum between informativeness and compression. However, some current XAI methods fail to meet the expectations of users and developers; several such failures have been reported in the literature, often tied to user-specific knowledge gaps and merely good-enough understanding. In this work, we propose Dynamic XAI to navigate the trade-off interactively. We argue that this simple idea can help overcome the trade-off by eliminating gaps in user-specific understanding and preventing misunderstandings. We conclude by situating our approach within the broader ethical considerations around XAI.
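The core idea of the abstract, that an explanation should be interactively adjustable along the informativeness-compression spectrum rather than fixed at one point, can be illustrated with a minimal sketch. Everything here (the `DynamicExplanation` class, its method names, and the layered-explanation representation) is a hypothetical illustration of the concept, not the authors' implementation.

```python
class DynamicExplanation:
    """Hypothetical sketch: an explanation the user can expand or
    compress interactively, instead of receiving one fixed output."""

    def __init__(self, layers):
        # layers: explanation texts ordered from most compressed
        # (index 0) to most informative (last index)
        self.layers = layers
        self.level = 0  # start at the most compressed view

    def current(self):
        return self.layers[self.level]

    def more_detail(self):
        # step toward informativeness, if a deeper layer exists
        if self.level < len(self.layers) - 1:
            self.level += 1
        return self.current()

    def less_detail(self):
        # step back toward compression
        if self.level > 0:
            self.level -= 1
        return self.current()


# Illustrative usage with made-up loan-decision explanations:
exp = DynamicExplanation([
    "Loan denied: income too low.",
    "Loan denied: income below the threshold for the requested amount.",
    "Loan denied: the income feature was the largest negative "
    "contributor to the model's approval score.",
])
print(exp.current())      # most compressed layer
print(exp.more_detail())  # one step toward informativeness
```

The design choice here is that a static XAI method corresponds to picking one of these layers in advance, while the dynamic variant lets the user walk the spectrum until their specific knowledge gap is closed.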