Hailong Su;Zhipeng Li;Chang-An Yuan;Vladimir F. Filaretov;Deshuang Huang
{"title":"Variational Graph Neural Network Based on Normalizing Flows","authors":"Hailong Su;Zhipeng Li;Chang-An Yuan;Vladimir F. Filaretov;Deshuang Huang","doi":"10.1109/TSIPN.2025.3530350","DOIUrl":null,"url":null,"abstract":"Graph Neural Networks (GNNs) have recently achieved significant success in processing non-Euclidean datasets, such as social and protein-protein interaction networks. However, these datasets often contain inherent uncertainties, such as missing edges between nodes that are closely related. Variational Graph Auto-Encoders (VGAE) and other Bayesian methods have been proposed to address the problem. Unfortunately, they can't handle graph data effectively. VGAE, for instance, the posterior is assumed to be Gaussian, which can not match the true posterior well. To overcome these limitations, a normalizing flows(NFs) based on variational GNN is proposed in this paper. Unlike VGAE, our approach no longer assumes that the posterior distribution is a standard Gaussian distribution, but instead utilizes NFs to learn more complex and flexible distributions. NFs transforms simple distributions into complex ones through a series of invertible transformations. The transformed distribution is more flexible and can match the true distribution better. Specifically, in order to obtain the reversible transformer, inspired by RealNVP, affine transformations on graphs are used to map a simple distribution to a complex one. The transformed distribution can infer more complex distributions like skewed. 
We conduct experiments in the link prediction task and our method performs excellently compared with other methods and even achieves state-of-the-art results on some datasets.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"177-186"},"PeriodicalIF":3.0000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal and Information Processing over Networks","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10845188/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Graph Neural Networks (GNNs) have recently achieved significant success in processing non-Euclidean datasets, such as social and protein-protein interaction networks. However, these datasets often contain inherent uncertainties, such as missing edges between closely related nodes. Variational Graph Auto-Encoders (VGAE) and other Bayesian methods have been proposed to address this problem, but they do not handle graph data effectively. In VGAE, for instance, the posterior is assumed to be Gaussian, which cannot match the true posterior well. To overcome these limitations, this paper proposes a variational GNN based on normalizing flows (NFs). Unlike VGAE, our approach no longer assumes that the posterior is a standard Gaussian distribution; instead, it uses NFs to learn more complex and flexible distributions. NFs transform a simple distribution into a complex one through a series of invertible transformations, and the transformed distribution can match the true distribution better. Specifically, to obtain the invertible transformation, affine transformations on graphs, inspired by RealNVP, are used to map a simple distribution to a complex one. The transformed distribution can capture more complex shapes, such as skewed distributions. We evaluate our method on the link prediction task, where it performs strongly against other methods and achieves state-of-the-art results on some datasets.
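To illustrate the mechanism the abstract refers to, the following is a minimal sketch of a RealNVP-style affine coupling layer, the kind of invertible transformation a normalizing flow stacks to turn a simple base distribution into a more flexible one. This is a hypothetical NumPy illustration, not the paper's implementation: the function names `coupling_forward`/`coupling_inverse` and the `scale_fn`/`shift_fn` conditioning networks are assumptions for exposition (in the paper these would operate on graph latents).

```python
import numpy as np

def coupling_forward(z, scale_fn, shift_fn):
    """Affine coupling: transform one half of z conditioned on the other half."""
    d = z.shape[-1] // 2
    z1, z2 = z[..., :d], z[..., d:]
    s = scale_fn(z1)                 # log-scale, computed from the untouched half
    t = shift_fn(z1)                 # shift, computed from the untouched half
    x2 = z2 * np.exp(s) + t          # elementwise affine transform of the second half
    x = np.concatenate([z1, x2], axis=-1)
    log_det = s.sum(axis=-1)         # log|det Jacobian| is just the sum of log-scales
    return x, log_det

def coupling_inverse(x, scale_fn, shift_fn):
    """Exact inverse of coupling_forward: invertibility holds by construction."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s = scale_fn(x1)
    t = shift_fn(x1)
    z2 = (x2 - t) * np.exp(-s)       # undo the affine map; no matrix inversion needed
    return np.concatenate([x1, z2], axis=-1)
```

Because only half of the vector is transformed per layer, the Jacobian is triangular and its log-determinant is cheap, which is what makes the flow's density tractable; stacking several such layers with alternating halves yields the complex, e.g. skewed, posteriors the abstract describes.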
Journal Introduction:
The IEEE Transactions on Signal and Information Processing over Networks publishes high-quality papers that extend the classical notions of processing of signals defined over vector spaces (e.g. time and space) to processing of signals and information (data) defined over networks, potentially dynamically varying. In signal processing over networks, the topology of the network may define structural relationships in the data, or may constrain processing of the data. Topics include distributed algorithms for filtering, detection, estimation, adaptation and learning, model selection, data fusion, and diffusion or evolution of information over such networks, and applications of distributed signal processing.