Navigating the informativeness-compression trade-off in XAI

Ninell Oldenburg, Anders Søgaard
{"title":"在XAI中导航信息性-压缩权衡","authors":"Ninell Oldenburg,&nbsp;Anders Søgaard","doi":"10.1007/s43681-025-00733-5","DOIUrl":null,"url":null,"abstract":"<div><p>Every explanation faces a trade-off between informativeness and compression (Kinney and Lombrozo, 2022). On the one hand, we want to aim for as much detailed and correct information as possible, informativeness, on the other hand, we want to ensure that a human can process and comprehend the explanation, compression. Current methods in eXplainable AI (XAI) try to satisfy this trade-off <i>statically</i>, outputting <i>one</i> fixed, non-adjustable explanation that sits somewhere on the spectrum between informativeness and compression. However, some current XAI methods fail to meet the expectations of users and developers such that several failures have been reported in the literature which often come with user-specific knowledge gaps and good-enough understanding. In this work, we propose <i>Dynamic XAI</i> to navigate the trade-off interactively. We argue how this simple idea can help overcome the trade-off by eliminating gaps in user-specific understanding and preventing misunderstandings. We conclude by situating our approach within the broader ethical considerations around XAI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4925 - 4942"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00733-5.pdf","citationCount":"0","resultStr":"{\"title\":\"Navigating the informativeness-compression trade-off in XAI\",\"authors\":\"Ninell Oldenburg,&nbsp;Anders Søgaard\",\"doi\":\"10.1007/s43681-025-00733-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Every explanation faces a trade-off between informativeness and compression (Kinney and Lombrozo, 2022). 
On the one hand, we want to aim for as much detailed and correct information as possible, informativeness, on the other hand, we want to ensure that a human can process and comprehend the explanation, compression. Current methods in eXplainable AI (XAI) try to satisfy this trade-off <i>statically</i>, outputting <i>one</i> fixed, non-adjustable explanation that sits somewhere on the spectrum between informativeness and compression. However, some current XAI methods fail to meet the expectations of users and developers such that several failures have been reported in the literature which often come with user-specific knowledge gaps and good-enough understanding. In this work, we propose <i>Dynamic XAI</i> to navigate the trade-off interactively. We argue how this simple idea can help overcome the trade-off by eliminating gaps in user-specific understanding and preventing misunderstandings. We conclude by situating our approach within the broader ethical considerations around XAI.</p></div>\",\"PeriodicalId\":72137,\"journal\":{\"name\":\"AI and ethics\",\"volume\":\"5 5\",\"pages\":\"4925 - 4942\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s43681-025-00733-5.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI and ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s43681-025-00733-5\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and 
ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00733-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


Every explanation faces a trade-off between informativeness and compression (Kinney and Lombrozo, 2022). On the one hand, we want to aim for as much detailed and correct information as possible (informativeness); on the other hand, we want to ensure that a human can process and comprehend the explanation (compression). Current methods in eXplainable AI (XAI) try to satisfy this trade-off statically, outputting one fixed, non-adjustable explanation that sits somewhere on the spectrum between informativeness and compression. However, some current XAI methods fail to meet the expectations of users and developers, and several such failures reported in the literature are often linked to user-specific knowledge gaps and merely good-enough understanding. In this work, we propose Dynamic XAI to navigate the trade-off interactively. We argue that this simple idea can help overcome the trade-off by eliminating gaps in user-specific understanding and preventing misunderstandings. We conclude by situating our approach within the broader ethical considerations around XAI.
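The core idea of navigating the trade-off interactively, rather than emitting one fixed explanation, can be illustrated with a minimal sketch. The class, method names, and feature attributions below are hypothetical illustrations, not the paper's actual implementation: an explanation object exposes controls that let the user slide between compression (few, top-weighted features) and informativeness (the full attribution list).

```python
from dataclasses import dataclass


@dataclass
class FeatureAttribution:
    """One feature's contribution to a model's prediction (illustrative)."""
    feature: str
    weight: float


class DynamicExplanation:
    """A hypothetical interactively adjustable explanation: the user, not
    the method, chooses where on the informativeness-compression spectrum
    the rendered explanation sits."""

    def __init__(self, attributions):
        # Sort once, most influential features first, so that every
        # compression level keeps the most informative features.
        self._attrs = sorted(attributions, key=lambda a: abs(a.weight), reverse=True)
        self.detail = 1  # start maximally compressed

    def more_detail(self):
        """User requests more informativeness (one more feature shown)."""
        self.detail = min(self.detail + 1, len(self._attrs))

    def less_detail(self):
        """User requests more compression (one fewer feature shown)."""
        self.detail = max(self.detail - 1, 1)

    def render(self):
        """Render only the top `detail` attributions as a short string."""
        shown = self._attrs[: self.detail]
        return ", ".join(f"{a.feature}: {a.weight:+.2f}" for a in shown)


exp = DynamicExplanation([
    FeatureAttribution("income", 0.62),
    FeatureAttribution("age", -0.31),
    FeatureAttribution("zip_code", 0.05),
])
print(exp.render())  # most compressed: single top feature
exp.more_detail()
print(exp.render())  # one step more informative
```

The design point is that the static trade-off becomes a user-driven loop: each `more_detail()`/`less_detail()` call moves the same underlying explanation along the spectrum, so no single fixed output has to anticipate every user's knowledge gap.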
