Ming Gu, Gaoming Yang, Zhuonan Zheng, Meihan Liu, Haishuai Wang, Jiawei Chen, Sheng Zhou, Jiajun Bu
Title: Frequency Self-Adaptation Graph Neural Network for Unsupervised Graph Anomaly Detection
Journal: Neural Networks, Volume 190, Article 107612 (Q1, Computer Science, Artificial Intelligence)
DOI: 10.1016/j.neunet.2025.107612
Published: 2025-05-27
Article URL: https://www.sciencedirect.com/science/article/pii/S0893608025004927
Citations: 0
Abstract
Unsupervised Graph Anomaly Detection (UGAD) seeks to identify abnormal patterns in graphs without relying on labeled data. Among existing UGAD methods, Graph Neural Networks (GNNs) have played a critical role in learning effective representations for detection by filtering low-frequency graph signals. However, the presence of anomalies can shift the frequency band of graph signals toward higher frequencies, thereby violating the fundamental assumptions underlying GNNs and anomaly detection frameworks. To address this challenge, the design of novel graph filters has garnered significant attention, with recent approaches leveraging anomaly labels in a semi-supervised manner. Nonetheless, the absence of anomaly labels in real-world scenarios renders these methods impractical, leaving the question of how to design effective filters in an unsupervised manner largely unexplored. To bridge this gap, we propose a novel Frequency Self-Adaptation Graph Neural Network for Unsupervised Graph Anomaly Detection (FAGAD). Specifically, FAGAD adaptively fuses signals across multiple frequency bands, using full-pass signals as a reference. It is optimized via a self-supervised learning approach, enabling the generation of effective representations for unsupervised graph anomaly detection. Experimental results demonstrate that FAGAD achieves state-of-the-art performance on both artificially injected datasets and real-world datasets. The code and datasets are publicly available at https://github.com/eaglelab-zju/FAGAD.
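The frequency shift the abstract describes can be illustrated with a minimal spectral sketch. This is not FAGAD's implementation; it is a hypothetical toy example on a 5-node path graph with the combinatorial Laplacian, showing that a constant (smooth) signal sits at the low end of the Laplacian spectrum while an injected anomaly moves signal energy toward higher frequencies:

```python
import numpy as np

# Toy illustration (an assumption for exposition, not the paper's method):
# smooth graph signals concentrate spectral energy at small Laplacian
# eigenvalues; a node whose value deviates sharply from its neighbors
# shifts energy toward larger eigenvalues (higher graph frequencies).

# Hypothetical 5-node path graph
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian D - A

evals, evecs = np.linalg.eigh(L)      # eigenvectors form the graph Fourier basis

def mean_frequency(x):
    """Spectral-energy-weighted mean eigenvalue of signal x."""
    coeffs = evecs.T @ x              # graph Fourier transform of x
    return float((coeffs**2 @ evals) / (coeffs**2).sum())

x_smooth = np.ones(5)                 # constant signal: purely low-frequency
x_anom = x_smooth.copy()
x_anom[2] = 5.0                       # inject an anomaly at node 2

print(mean_frequency(x_smooth))       # near 0
print(mean_frequency(x_anom))         # clearly larger: energy shifted up-spectrum
```

A standard low-pass GNN filter would attenuate exactly those high-frequency components, smoothing the anomaly away; this is the motivation the abstract gives for fusing multiple frequency bands instead of relying on low-pass filtering alone.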
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.