Abstracting general syntax for XAI after decomposing explanation sub-components

IF 10.7 · CAS Tier 2, Computer Science · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Stephen Wormald, Matheus Kunzler Maldaner, Kristian D. O’Connor, Olivia P. Dizon-Paradis, Damon L. Woodard
{"title":"Abstracting general syntax for XAI after decomposing explanation sub-components","authors":"Stephen Wormald,&nbsp;Matheus Kunzler Maldaner,&nbsp;Kristian D. O’Connor,&nbsp;Olivia P. Dizon-Paradis,&nbsp;Damon L. Woodard","doi":"10.1007/s10462-025-11216-8","DOIUrl":null,"url":null,"abstract":"<div><p>Healthcare providers, policymakers, and defense contractors need to understand many types of machine learning model behaviors. While eXplainable Artificial Intelligence (XAI) provides tools for interpreting these behaviors, few frameworks, surveys, and taxonomies produce succinct yet general notation to help researchers and practitioners describe their explainability needs and quantify whether these needs are met. Such quantified comparisons could help individuals rank XAI methods by their relevance to use-cases, select explanations best suited for individual users, and evaluate what explanations are most useful for describing model behaviors. This paper collects, decomposes, and abstracts subcomponents of common XAI methods to identify a <i>mathematically grounded</i> syntax that <i>applies generally</i> to describing <i>modern and future</i> explanation types while remaining <i>useful for discovering novel XAI methods</i>. The resulting syntax, introduced as the <i>Qi</i>-Framework, generally defines explanation types in terms of the information being explained, their utility to inspectors, and the methods and information used to produce explanations. Just as programming languages define syntax to structure, simplify, and standardize software development, so too the <i>Qi</i>-Framework acts as a common language to help researchers and practitioners select, compare, and discover XAI methods. Derivative works may extend and implement the <i>Qi</i>-Framework to develop a more rigorous science for interpretable machine learning and inspire collaborative competition across XAI research.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 8","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11216-8.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11216-8","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Healthcare providers, policymakers, and defense contractors need to understand many types of machine learning model behaviors. While eXplainable Artificial Intelligence (XAI) provides tools for interpreting these behaviors, few frameworks, surveys, and taxonomies produce succinct yet general notation to help researchers and practitioners describe their explainability needs and quantify whether these needs are met. Such quantified comparisons could help individuals rank XAI methods by their relevance to use-cases, select explanations best suited for individual users, and evaluate what explanations are most useful for describing model behaviors. This paper collects, decomposes, and abstracts subcomponents of common XAI methods to identify a mathematically grounded syntax that applies generally to describing modern and future explanation types while remaining useful for discovering novel XAI methods. The resulting syntax, introduced as the Qi-Framework, generally defines explanation types in terms of the information being explained, their utility to inspectors, and the methods and information used to produce explanations. Just as programming languages define syntax to structure, simplify, and standardize software development, so too the Qi-Framework acts as a common language to help researchers and practitioners select, compare, and discover XAI methods. Derivative works may extend and implement the Qi-Framework to develop a more rigorous science for interpretable machine learning and inspire collaborative competition across XAI research.
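
The abstract does not reproduce the Qi-Framework's formal notation, but it does name the components from which explanation types are defined: the information being explained, the explanation's utility to inspectors, and the method and information used to produce the explanation. The following is a minimal, hypothetical Python sketch of how one might record an explanation type in those terms; every name below is an illustrative assumption, not the Qi-Framework's actual syntax.

```python
# Hypothetical sketch only: the Qi-Framework defines a mathematical syntax,
# not a Python API. Field names are assumptions drawn solely from the
# components enumerated in the abstract.
from dataclasses import dataclass

@dataclass
class ExplanationType:
    explained_information: str  # what model information/behavior is explained
    inspector_utility: str      # what the explanation is useful for, and to whom
    production_method: str      # the XAI method that produces the explanation
    production_inputs: str      # the information that method consumes

# Example: describing a saliency-map explanation in these terms.
saliency = ExplanationType(
    explained_information="pixel-level influence on a single classification",
    inspector_utility="lets a clinician check whether the model attends to pathology",
    production_method="gradient-based attribution (e.g., integrated gradients)",
    production_inputs="model gradients and one input image",
)

print(saliency)
```

Structuring explanation types this way illustrates the abstract's claim that a common syntax lets practitioners compare XAI methods field by field, e.g., ranking candidate methods by how well each field matches a use-case.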

Source journal
Artificial Intelligence Review (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles per year: 194
Review time: 5.3 months
About the journal: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.