{"title":"Graph Spiking Attention Network: Sparsity, Efficiency and Robustness","authors":"Beibei Wang;Bo Jiang;Jin Tang;Lu Bai;Bin Luo","doi":"10.1109/TPAMI.2025.3593912","DOIUrl":null,"url":null,"abstract":"Existing Graph Attention Networks (GATs) generally adopt the self-attention mechanism to learn graph edge attention, which usually return dense attention coefficients over all neighbors and thus are prone to be sensitive to graph edge noises. To overcome this problem, sparse GATs are desirable and have garnered increasing interest in recent years. However, existing sparse GATs usually suffer from <italic>high training complexity</i> and are also <italic>not straightforward</i> for inductive learning tasks. To address these issues, we propose to learn <bold>sparse</b> GATs by exploiting spiking neuron (SN) mechanism, termed Graph Spiking Attention (GSAT). Specifically, it is known that spiking neuron can perform inexpensive information processing by transmitting the input data into discrete spike trains and return sparse outputs. Inspired by it, this work attempts to exploit spiking neuron to learn sparse attention coefficients, resulting in edge-sparsified graph for GNNs. Therefore, GSAT can perform message passing on the selective neighbors naturally, which makes GSAT perform compactly and robustly w.r.t graph noises. Moreover, GSAT can be used straightforwardly for inductive learning tasks. Extensive experiments on both transductive and inductive tasks demonstrate the <italic>effectiveness</i>, <italic>robustness</i> and <italic>efficiency</i> of GSAT.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 11","pages":"10862-10869"},"PeriodicalIF":18.6000,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11104926/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Existing Graph Attention Networks (GATs) generally adopt a self-attention mechanism to learn graph edge attention, which usually returns dense attention coefficients over all neighbors and is therefore sensitive to graph edge noise. To overcome this problem, sparse GATs are desirable and have garnered increasing interest in recent years. However, existing sparse GATs usually suffer from high training complexity and are not straightforward to apply to inductive learning tasks. To address these issues, we propose to learn sparse GATs by exploiting the spiking neuron (SN) mechanism, termed Graph Spiking Attention (GSAT). Specifically, spiking neurons are known to perform inexpensive information processing by converting input data into discrete spike trains and returning sparse outputs. Inspired by this, this work exploits spiking neurons to learn sparse attention coefficients, yielding an edge-sparsified graph for GNNs. Therefore, GSAT naturally performs message passing over the selected neighbors only, which makes it compact and robust with respect to graph edge noise. Moreover, GSAT can be used straightforwardly for inductive learning tasks. Extensive experiments on both transductive and inductive tasks demonstrate the effectiveness, robustness and efficiency of GSAT.
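To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' GSAT implementation) of spiking-sparsified edge attention: GAT-style edge logits drive a simplified spiking unit with a firing threshold and a straight-through surrogate gradient, sub-threshold edges emit no spike and are dropped, and message passing runs only over the spiking (selected) edges. The class name `SpikingEdgeAttention`, the threshold value, and the omission of attention normalization are all assumptions made for illustration.

```python
# Illustrative sketch only: a GAT-like layer whose attention coefficients are
# sparsified by a threshold-based spiking nonlinearity, so messages are passed
# only along edges that "fire". Not the paper's actual GSAT architecture.
import torch
import torch.nn as nn


class SpikingEdgeAttention(nn.Module):
    """GAT-style edge attention sparsified by a spike threshold (assumed design)."""

    def __init__(self, in_dim: int, out_dim: int, threshold: float = 0.5):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.att_src = nn.Parameter(torch.randn(out_dim))
        self.att_dst = nn.Parameter(torch.randn(out_dim))
        self.threshold = threshold  # assumed firing threshold of the spiking unit

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim] node features; edge_index: [2, E] with rows (source, target)
        src, dst = edge_index
        h = self.lin(x)                                              # [N, out_dim]
        # GAT-style per-edge attention logits
        logits = (h[src] * self.att_src).sum(-1) + (h[dst] * self.att_dst).sum(-1)
        membrane = torch.sigmoid(logits)                             # "membrane potential" in (0, 1)
        spike = (membrane >= self.threshold).float()                 # binary spike per edge
        # straight-through surrogate gradient so the hard threshold stays trainable
        spike = spike + membrane - membrane.detach()
        alpha = membrane * spike                                     # sparse attention: 0 on non-spiking edges
        # message passing restricted to spiking edges (zero-weight edges contribute nothing)
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return out


# Toy usage: 4 nodes, 5 directed edges
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 0],
                           [1, 2, 3, 0, 2]])
layer = SpikingEdgeAttention(8, 16)
print(layer(x, edge_index).shape)  # torch.Size([4, 16])
```

Because the spike is binary, edges whose logits stay below the threshold are effectively removed from the graph, which is one simple way to realize the edge-sparsified message passing and the resulting noise robustness that the abstract describes; the paper's actual spiking dynamics and training procedure may differ.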