{"title":"Topology-Based Node-Level Membership Inference Attacks on Graph Neural Networks","authors":"Faqian Guan;Tianqing Zhu;Wanlei Zhou;Philip S. Yu","doi":"10.1109/TBDATA.2025.3558855","DOIUrl":null,"url":null,"abstract":"Graph neural networks (GNNs) have obtained considerable attention due to their ability to leverage the inherent topological and node information present in graph data. While extensive research has been conducted on privacy attacks targeting machine learning models, the exploration of privacy risks associated with node-level membership inference attacks on GNNs remains relatively limited. GNNs learn representations that encapsulate valuable information about the nodes. These learned representations can be exploited by attackers to infer whether a specific node belongs to the training dataset, leading to the disclosure of sensitive information. The insidious nature of such privacy breaches often leads to an underestimation of the associated risks. Furthermore, the inherent challenges posed by node membership inference attacks make it difficult to develop effective attack models for GNNs that can successfully infer node membership. We propose a more efficient approach that specifically targets node-level membership inference attacks on GNNs. Initially, we combine nodes and their respective neighbors to carry out node membership inference attacks. To address the challenge of variable-length features arising from the differing number of neighboring nodes, we introduce an effective feature processing strategy. Furthermore, we propose two strategies: multiple training of shadow models and random selection of non-membership data, to enhance the performance of the attack model. We empirically evaluate the efficacy of our proposed method using three benchmark datasets. Additionally, we explore two potential defense mechanisms against node-level membership inference attacks.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 5","pages":"2809-2826"},"PeriodicalIF":5.7000,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Big Data","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10955491/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
Graph neural networks (GNNs) have attracted considerable attention due to their ability to leverage the inherent topological and node information present in graph data. While extensive research has been conducted on privacy attacks targeting machine learning models, the privacy risks associated with node-level membership inference attacks on GNNs remain relatively unexplored. GNNs learn representations that encapsulate valuable information about the nodes. Attackers can exploit these learned representations to infer whether a specific node belongs to the training dataset, leading to the disclosure of sensitive information. The insidious nature of such privacy breaches often leads to an underestimation of the associated risks. Furthermore, the inherent challenges of node membership inference make it difficult to develop effective attack models that can reliably infer node membership in GNNs. We propose a more efficient approach that specifically targets node-level membership inference attacks on GNNs. First, we combine nodes with their respective neighbors to carry out node membership inference attacks. To address the variable-length features arising from differing numbers of neighboring nodes, we introduce an effective feature processing strategy. Furthermore, we propose two strategies, multiple training of shadow models and random selection of non-member data, to enhance the performance of the attack model. We empirically evaluate the efficacy of the proposed method on three benchmark datasets. Additionally, we explore two potential defense mechanisms against node-level membership inference attacks.
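The abstract describes the attack pipeline only at a high level. The Python sketch below is a minimal illustration of that general pipeline under stated assumptions: it concatenates a node's posterior with its neighbors' posteriors, pads or truncates the neighbor set to a fixed budget K to handle variable-length features, pools member and non-member examples from several shadow-model runs, and balances classes by randomly subsampling non-members. The helper names (`build_attack_feature`, `train_attack_model`), the neighbor budget K, the zero-padding scheme, and the MLP attack classifier are illustrative choices, not the paper's exact design.

```python
# Minimal illustrative sketch of a node-level membership inference pipeline.
# The fixed neighbor budget K, the zero-padding/truncation scheme, the helper
# names, and the MLP attack classifier are assumptions for illustration; the
# paper's exact feature-processing strategy may differ.
import numpy as np
from sklearn.neural_network import MLPClassifier

K = 5  # assumed fixed neighbor budget for handling variable-length features


def build_attack_feature(posteriors, adjacency, node_id, k=K):
    """Concatenate a node's posterior with its (padded/truncated) neighbors' posteriors.

    posteriors: (N, C) array of the model's class-probability outputs per node.
    adjacency:  dict mapping a node id to a list of neighbor ids.
    """
    own = posteriors[node_id]
    neighbors = [posteriors[j] for j in adjacency.get(node_id, [])[:k]]  # truncate to k
    while len(neighbors) < k:                                            # pad with zeros
        neighbors.append(np.zeros_like(own))
    return np.concatenate([own] + neighbors)


def train_attack_model(shadow_runs, seed=0):
    """Fit the attack classifier on features collected from several shadow-model runs.

    shadow_runs: iterable of (posteriors, adjacency, member_ids, nonmember_ids)
                 tuples, one per shadow-model training run (the abstract's
                 "multiple training of shadow models").
    """
    rng = np.random.default_rng(seed)
    features, labels = [], []
    for posteriors, adjacency, member_ids, nonmember_ids in shadow_runs:
        # Randomly subsample non-members so the two classes stay balanced
        # (one plausible reading of "random selection of non-member data").
        n = min(len(member_ids), len(nonmember_ids))
        sampled = rng.choice(nonmember_ids, size=n, replace=False)
        for nid in member_ids:
            features.append(build_attack_feature(posteriors, adjacency, nid))
            labels.append(1)  # member of the shadow training set
        for nid in sampled:
            features.append(build_attack_feature(posteriors, adjacency, nid))
            labels.append(0)  # non-member
    attack = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    attack.fit(np.array(features), np.array(labels))
    return attack
```

At attack time, the same `build_attack_feature` routine would be applied to the target model's posteriors for a candidate node, and `attack.predict` would return the membership guess.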
Journal Introduction:
The IEEE Transactions on Big Data publishes peer-reviewed articles focusing on big data. These articles present innovative research ideas and application results across disciplines, including novel theories, algorithms, and applications. Research areas cover a wide range, such as big data analytics, visualization, curation, management, semantics, infrastructure, standards, performance analysis, intelligence extraction, scientific discovery, security, privacy, and legal issues specific to big data. The journal also prioritizes applications of big data in fields generating massive datasets.