Node classification in the heterophilic regime via diffusion-jump GNNs
Ahmed Begga, Francisco Escolano, Miguel Ángel Lozano
Neural Networks, Volume 181, Article 106830. DOI: 10.1016/j.neunet.2024.106830. Published online: 2024-10-26.
In the ideal (homophilic) regime of vanilla GNNs, nodes belonging to the same community share the same label: most of the nodes are harmonic, i.e., given some labeled nodes, their unknown labels result from averaging those of their neighbors. In other words, heterophily (neighboring nodes having different labels) can be seen as a “loss of harmonicity”.
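To make the notion of a harmonic node concrete, here is a minimal NumPy sketch (ours, not from the paper) of harmonic label propagation on a toy path graph: the labeled nodes stay clamped while every unknown label is repeatedly replaced by the average of its neighbors' labels.

```python
import numpy as np

# Toy path graph 0-1-2-3-4, given as an adjacency matrix.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

labeled = {0: 0.0, 4: 1.0}   # known labels stay clamped
f = np.zeros(A.shape[0])
for i, y in labeled.items():
    f[i] = y

# Harmonic update: each unknown node takes the average of its neighbors.
for _ in range(200):
    for i in range(A.shape[0]):
        if i not in labeled:
            neighbors = np.nonzero(A[i])[0]
            f[i] = f[neighbors].mean()

print(f)  # converges to [0, 0.25, 0.5, 0.75, 1]: a harmonic interpolation
```

When neighboring nodes carry different labels (heterophily), this neighborhood averaging is exactly what breaks down.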
In this paper, we define “structural heterophily” as the ratio between the harmonicity of the network (its Laplacian Dirichlet energy) and the harmonicity of its homophilic version (the so-called “ground” energy). This new measure inspires a novel GNN model (Diffusion-Jump GNN) that bypasses structural heterophily by “jumping” through the network in order to relate distant homologs. However, instead of using hops, as standard High-Order (HO) GNNs such as MixHop do, our jumps are rooted in a well-known structural metric: the diffusion distance.
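For reference, one common way to write these quantities (our notation; the paper's exact normalization may differ): with $L$ the graph Laplacian, $f$ a label signal, and $(\lambda_k, \phi_k)$ the Laplacian eigenpairs,

$$
E(f) = f^{\top} L f = \sum_{(i,j) \in E} w_{ij}\,(f_i - f_j)^2,
\qquad
\mathcal{H}_{\mathrm{struct}} = \frac{E(f)}{E_{\mathrm{ground}}},
\qquad
d_t(i,j)^2 = \sum_{k} e^{-2 \lambda_k t}\,\bigl(\phi_k(i) - \phi_k(j)\bigr)^2,
$$

where $E_{\mathrm{ground}}$ is the Dirichlet energy of the homophilic (“ground”) version of the network and $d_t$ is the heat-kernel form of the diffusion distance at scale $t$.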
Computing the “diffusion matrix” (DM) is the core of this method. Our main contribution is that we learn both the diffusion distances and the “structural filters” derived from them. Since diffusion distances have a spectral interpretation, we learn orthogonal approximations of the Laplacian eigenvectors while the prediction loss is minimized. This leads to an interplay between a Dirichlet loss, which captures low-frequency content, and a prediction loss, which refines that content into empirical eigenfunctions. Finally, our experimental results show that we are highly competitive with the State-Of-the-Art (SOTA) on both homophilic and heterophilic datasets, even on large graphs.
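The abstract describes an interplay of three ingredients: a Dirichlet term, an orthogonality constraint on the learned eigenvector approximations, and a prediction loss. A minimal PyTorch sketch of such a combined objective follows; it is our illustration of the idea, not the authors' implementation, and the names `U`, `alpha`, `beta`, and `train_mask` are hypothetical.

```python
import torch
import torch.nn.functional as F

def dirichlet_loss(U: torch.Tensor, L: torch.Tensor) -> torch.Tensor:
    # tr(U^T L U): small when the columns of U are smooth
    # (low-frequency) signals on the graph.
    return torch.trace(U.T @ L @ U)

def orthogonality_penalty(U: torch.Tensor) -> torch.Tensor:
    # ||U^T U - I||_F^2: pushes the columns of U toward an orthonormal
    # set, i.e. toward approximate Laplacian eigenvectors.
    k = U.shape[1]
    return ((U.T @ U - torch.eye(k, device=U.device)) ** 2).sum()

def total_loss(U, L, logits, labels, train_mask, alpha=1.0, beta=1.0):
    # The prediction loss on labeled nodes refines the low-frequency
    # content captured by the Dirichlet term ("empirical eigenfunctions").
    pred = F.cross_entropy(logits[train_mask], labels[train_mask])
    return pred + alpha * dirichlet_loss(U, L) + beta * orthogonality_penalty(U)
```

Minimizing the Dirichlet term alone would recover the lowest-frequency eigenvectors; the prediction term biases them toward directions that actually separate the classes.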
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically inspired artificial intelligence.