Explaining Graph Neural Networks with mixed-integer programming

IF 5.5 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Blake B. Gaines, Chunjiang Zhu, Jinbo Bi
Neurocomputing, Volume 639, Article 130214. Published 19 April 2025. DOI: 10.1016/j.neucom.2025.130214
Citations: 0

Abstract

Graph Neural Networks (GNNs) provide state-of-the-art graph learning performance, but their lack of transparency hinders our ability to understand and trust them, ultimately limiting the areas where they can be applied. Many methods exist to explain individual predictions made by GNNs, but there are fewer ways to gain more general insight into the patterns they have been trained to identify. Most existing methods for model-level GNN explanations attempt to generate graphs that exemplify these patterns, but the discreteness of graphs and the nonlinearity of deep GNNs make finding such graphs difficult. In this paper, we formulate the search for an explanatory graph as a mixed-integer programming (MIP) problem, in which decision variables specify the explanation graph and the objective function represents the quality of the graph as an explanation for a GNN’s predictions of an entire class in the dataset. This approach, which we call MIPExplainer, allows us to directly optimize over the discrete input space and find globally optimal solutions with a minimal number of hyperparameters. MIPExplainer outperforms existing methods in finding accurate and stable explanations on both synthetic and real-world datasets. Code is available at https://github.com/blake-gaines/MIPExplainer.
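To make the core idea concrete: MIPExplainer treats the explanation graph itself as a set of discrete decision variables and maximizes a class score over them, guaranteeing a globally optimal solution. The toy sketch below is not the authors' formulation; it uses a hypothetical fixed per-edge scoring function as a stand-in for an encoded GNN, and exhaustive enumeration as a stand-in for a MIP solver, but it shows the same search: binary edge variables, an edge-budget constraint, and a globally optimal explanation graph.

```python
from itertools import combinations, product

N = 4  # nodes in the candidate explanation graph
EDGES = list(combinations(range(N), 2))  # undirected edge slots
BUDGET = 3  # sparsity constraint: at most 3 edges in the explanation

# Hypothetical per-edge contributions to the target-class score of a
# fixed, linearized surrogate "GNN". A real MIP formulation instead
# encodes the trained network's layers as constraints on the variables.
W = {(0, 1): 2.0, (0, 2): -1.0, (0, 3): 0.5,
     (1, 2): 1.5, (1, 3): -0.5, (2, 3): 1.0}

best_score, best_graph = float("-inf"), None
for bits in product([0, 1], repeat=len(EDGES)):  # all 2^|E| graphs
    if sum(bits) > BUDGET:  # enforce the edge budget
        continue
    score = sum(W[e] * b for e, b in zip(EDGES, bits))
    if score > best_score:
        best_score = score
        best_graph = [e for e, b in zip(EDGES, bits) if b]

print(best_graph, best_score)  # globally optimal under the surrogate
```

Enumeration is exponential in the number of edge slots; the paper's point is that handing this same discrete problem to a MIP solver lets branch-and-bound prune the search while retaining the global-optimality guarantee.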
Source journal: Neurocomputing (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.