Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features

Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, Klaus-Robert Müller
{"title":"Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features","authors":"Thomas Schnake, Farnoush Rezaei Jafaria, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, Klaus-Robert Müller","doi":"arxiv-2408.17198","DOIUrl":null,"url":null,"abstract":"Explainable Artificial Intelligence (XAI) plays a crucial role in fostering\ntransparency and trust in AI systems, where traditional XAI approaches\ntypically offer one level of abstraction for explanations, often in the form of\nheatmaps highlighting single or multiple input features. However, we ask\nwhether abstract reasoning or problem-solving strategies of a model may also be\nrelevant, as these align more closely with how humans approach solutions to\nproblems. We propose a framework, called Symbolic XAI, that attributes\nrelevance to symbolic queries expressing logical relationships between input\nfeatures, thereby capturing the abstract reasoning behind a model's\npredictions. The methodology is built upon a simple yet general multi-order\ndecomposition of model predictions. This decomposition can be specified using\nhigher-order propagation-based relevance methods, such as GNN-LRP, or\nperturbation-based explanation methods commonly used in XAI. The effectiveness\nof our framework is demonstrated in the domains of natural language processing\n(NLP), vision, and quantum chemistry (QC), where abstract symbolic domain\nknowledge is abundant and of significant interest to users. The Symbolic XAI\nframework provides an understanding of the model's decision-making process that\nis both flexible for customization by the user and human-readable through\nlogical formulas.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.17198","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Explainable Artificial Intelligence (XAI) plays a crucial role in fostering transparency and trust in AI systems, where traditional XAI approaches typically offer one level of abstraction for explanations, often in the form of heatmaps highlighting single or multiple input features. However, we ask whether abstract reasoning or problem-solving strategies of a model may also be relevant, as these align more closely with how humans approach solutions to problems. We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features, thereby capturing the abstract reasoning behind a model's predictions. The methodology is built upon a simple yet general multi-order decomposition of model predictions. This decomposition can be specified using higher-order propagation-based relevance methods, such as GNN-LRP, or perturbation-based explanation methods commonly used in XAI. The effectiveness of our framework is demonstrated in the domains of natural language processing (NLP), vision, and quantum chemistry (QC), where abstract symbolic domain knowledge is abundant and of significant interest to users. The Symbolic XAI framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable through logical formulas.
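To make the idea of attributing relevance to symbolic queries concrete, below is a minimal sketch (not the paper's exact formulation) of a perturbation-based multi-order decomposition: each feature subset receives a contribution via baseline masking and inclusion-exclusion, and the relevance of a query such as "feature 0 AND feature 1 act together" is obtained by summing the contributions of all subsets that satisfy the query. The toy `model`, the zero baseline, and all helper names are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def model(x):
    # Stand-in for any black-box predictor f(x); here a toy function with an interaction term.
    return x[0] * x[1] + 0.5 * x[2]

def masked_prediction(x, keep, baseline=0.0):
    """Evaluate the model with all features outside `keep` replaced by a baseline value."""
    x_masked = np.full_like(x, baseline, dtype=float)
    x_masked[list(keep)] = x[list(keep)]
    return model(x_masked)

def subset_contributions(x, n_features):
    """Assign each feature subset S the part of the prediction that no proper
    subset of S already explains (inclusion-exclusion over masked predictions)."""
    contrib = {}
    features = range(n_features)
    for r in range(n_features + 1):
        for S in itertools.combinations(features, r):
            total = masked_prediction(x, S)
            inner = sum(contrib[T] for T in contrib if set(T) < set(S))
            contrib[S] = total - inner
    return contrib

def query_relevance(contrib, query):
    """Aggregate the contributions of all subsets satisfying a symbolic query,
    given as a predicate over feature-index sets."""
    return sum(v for S, v in contrib.items() if query(set(S)))

x = np.array([2.0, 3.0, 1.0])
contrib = subset_contributions(x, n_features=3)

# Relevance of "features 0 and 1 jointly contribute"
r_and = query_relevance(contrib, lambda S: {0, 1} <= S)
# Relevance of "feature 2 is involved, alone or with others"
r_feat2 = query_relevance(contrib, lambda S: 2 in S)
print(r_and, r_feat2)
```

Enumerating all feature subsets scales exponentially, so a sketch like this is only practical for small feature groups; the abstract notes that in practice the decomposition can instead be specified with higher-order propagation-based methods such as GNN-LRP.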