Graph Spiking Attention Network: Sparsity, Efficiency and Robustness

Beibei Wang; Bo Jiang; Jin Tang; Lu Bai; Bin Luo

IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 11, pp. 10862-10869. Published 2025-07-30. DOI: 10.1109/TPAMI.2025.3593912. Impact Factor: 18.6. Citations: 0.

Abstract

Existing Graph Attention Networks (GATs) generally adopt the self-attention mechanism to learn graph edge attention. This mechanism usually returns dense attention coefficients over all neighbors and is therefore prone to being sensitive to graph edge noise. To overcome this problem, sparse GATs are desirable and have garnered increasing interest in recent years. However, existing sparse GATs usually suffer from high training complexity and are also not straightforward to apply to inductive learning tasks. To address these issues, we propose to learn sparse GATs by exploiting the spiking neuron (SN) mechanism, termed Graph Spiking Attention (GSAT). Specifically, spiking neurons are known to perform inexpensive information processing by encoding input data into discrete spike trains and returning sparse outputs. Inspired by this, our work exploits spiking neurons to learn sparse attention coefficients, yielding an edge-sparsified graph for GNNs. GSAT can therefore naturally perform message passing over the selected neighbors, which makes it compact and robust with respect to graph noise. Moreover, GSAT can be used straightforwardly for inductive learning tasks. Extensive experiments on both transductive and inductive tasks demonstrate the effectiveness, robustness and efficiency of GSAT.
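The abstract does not specify the implementation, but the core idea, pushing attention logits through a spiking threshold so that each edge either fires (and is kept) or stays silent (and is dropped), can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' code: the layer name SpikingGraphAttention, the fixed firing threshold, and the rectangular surrogate gradient are all our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a straight-through surrogate gradient,
    a common trick for training through non-differentiable spikes."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the threshold.
        return grad_out * (x.abs() < 0.5).float()

class SpikingGraphAttention(nn.Module):
    """Hypothetical single-head sketch: GAT-style attention logits are
    thresholded by a spiking nonlinearity, so each edge is either kept
    (spike = 1) or dropped (spike = 0), giving an edge-sparsified graph
    over which message passing is performed."""
    def __init__(self, in_dim, out_dim, threshold=0.5):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.att_src = nn.Parameter(torch.randn(out_dim) * 0.1)
        self.att_dst = nn.Parameter(torch.randn(out_dim) * 0.1)
        self.threshold = threshold

    def forward(self, x, edge_index):
        # edge_index: [2, E] with rows (source, target), PyG convention.
        src, dst = edge_index
        h = self.lin(x)                                   # [N, D]
        logits = (h[src] * self.att_src).sum(-1) + \
                 (h[dst] * self.att_dst).sum(-1)          # [E]
        logits = F.leaky_relu(logits, 0.2)
        spikes = SpikeFn.apply(logits - self.threshold)   # binary edge mask
        w = spikes * torch.sigmoid(logits)                # sparse coefficients
        # Normalize over the surviving incoming edges of each target node.
        denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, w)
        w = w / denom.clamp(min=1e-6)[dst]
        out = torch.zeros_like(h).index_add_(0, dst, w.unsqueeze(-1) * h[src])
        return out

# Toy usage: 4 nodes, 5 directed edges.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 0],
                           [1, 2, 3, 0, 2]])
layer = SpikingGraphAttention(8, 16)
print(layer(x, edge_index).shape)  # torch.Size([4, 16])
```

Because sub-threshold edges receive an exactly zero weight, aggregation only touches the selected neighbors, which is what gives the approach its claimed efficiency and robustness to noisy edges; the surrogate gradient keeps the binary mask trainable end to end.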