Xianming Huang , Yang Yan (闫旸) , Qiuyan Wang , Haoyu Pan , Hanning Chen , Xingguo Liu
Journal of Computational Science, Volume 90, Article 102628. DOI: 10.1016/j.jocs.2025.102628. Published 2025-06-17.
Graph node classification with soft-flow convolution and linear-complexity attention mechanism
Traditional Graph Neural Networks (GNNs) typically use a message-passing mechanism to aggregate information from neighboring nodes. This mechanism is analogous to diffusing messages and often homogenizes node features. GNNs also tend to be ineffective at capturing features from distant nodes and at learning the global structure of the graph, which can reduce performance in node classification tasks. To address these issues, this paper proposes a novel model, the Enhanced Soft-Flow Graph Convolutional Network (ESAGCN), based on a global attention mechanism. The model defines a learnable, parameterized phase angle that allows the edge directions between nodes to change continuously, enabling features to flow between nodes. Additionally, it incorporates the self-attention mechanism from Transformers to capture global information within the graph network, enhancing the global representation of nodes. We also employ a simple kernel trick to reduce the complexity of the model's global attention mechanism to linear complexity. Experimental results demonstrate that the integration of global and local information in graphs is crucial for the learning process of GNNs, especially in directed graphs, significantly improving the accuracy of node classification.
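The abstract's "simple kernel trick" for linear-complexity attention is not spelled out here; a common realization replaces the softmax with a positive feature map φ, so that attention can be computed as φ(Q)(φ(K)ᵀV) in O(n) rather than O(n²). The sketch below uses the widely used elu(x)+1 feature map; the paper's exact kernel choice is an assumption, not taken from the source.

```python
import numpy as np

def linear_attention(Q, K, V):
    """O(n) attention via a kernel feature map.

    Uses phi(x) = elu(x) + 1 as a positive feature map (an assumption;
    the paper's exact kernel is not specified in the abstract).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    Qp, Kp = phi(Q), phi(K)          # (n, d) feature-mapped queries/keys
    KV = Kp.T @ V                    # (d, d): keys and values aggregated once
    Z = Qp @ Kp.sum(axis=0)          # (n,): per-query normalizer
    return (Qp @ KV) / Z[:, None]    # (n, d), linear in the number of nodes n

# Toy example: 6 nodes with 4-dimensional features.
n, d = 6, 4
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)
```

Because the (d, d) summary `KV` is shared across all queries, the cost scales with the number of nodes n rather than n², which is what makes global attention over large graphs tractable.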
About the Journal:
Computational Science is a rapidly growing multi- and interdisciplinary field that uses advanced computing and data analysis to understand and solve complex problems. It has reached a level of predictive capability that now firmly complements the traditional pillars of experimentation and theory.
The recent advances in experimental techniques such as detectors, on-line sensor networks and high-resolution imaging techniques, have opened up new windows into physical and biological processes at many levels of detail. The resulting data explosion allows for detailed data driven modeling and simulation.
This new discipline in science combines computational thinking, modern computational methods, devices and collateral technologies to address problems far beyond the scope of traditional numerical methods.
Computational science typically unifies three distinct elements:
• Modeling, Algorithms and Simulations (e.g. numerical and non-numerical, discrete and continuous);
• Software developed to solve problems in science (e.g., biological, physical, and social), engineering, medicine, and the humanities;
• Computer and information science that develops and optimizes the advanced system hardware, software, networking, and data management components (e.g., problem-solving environments).