Topology-Based Node-Level Membership Inference Attacks on Graph Neural Networks

IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Faqian Guan;Tianqing Zhu;Wanlei Zhou;Philip S. Yu
DOI: 10.1109/TBDATA.2025.3558855
Journal: IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2809-2826
Published: 2025-04-08 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10955491/
Citations: 0

Abstract

Graph neural networks (GNNs) have attracted considerable attention due to their ability to leverage the inherent topological and node information present in graph data. While extensive research has been conducted on privacy attacks targeting machine learning models, the exploration of privacy risks associated with node-level membership inference attacks on GNNs remains relatively limited. GNNs learn representations that encapsulate valuable information about the nodes. These learned representations can be exploited by attackers to infer whether a specific node belongs to the training dataset, leading to the disclosure of sensitive information. The insidious nature of such privacy breaches often leads to an underestimation of the associated risks. Furthermore, the inherent challenges posed by node membership inference attacks make it difficult to develop effective attack models for GNNs that can successfully infer node membership. We propose a more efficient approach that specifically targets node-level membership inference attacks on GNNs. First, we combine nodes and their respective neighbors to carry out node membership inference attacks. To address the challenge of variable-length features arising from the differing number of neighboring nodes, we introduce an effective feature processing strategy. Furthermore, we propose two strategies, training multiple shadow models and randomly selecting non-membership data, to enhance the performance of the attack model. We empirically evaluate the efficacy of our proposed method using three benchmark datasets. Additionally, we explore two potential defense mechanisms against node-level membership inference attacks.
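The attack family the abstract describes builds on a standard intuition: a model tends to be more confident on nodes it was trained on than on unseen nodes, and shadow models calibrated by the attacker expose that gap. The paper's actual attack is richer (it folds in neighbor features and trains multiple shadow models), but a minimal confidence-threshold sketch illustrates the underlying principle. Everything below is synthetic and illustrative, not the authors' method; the function names and example posteriors are assumptions for the sketch.

```python
# Minimal sketch of a confidence-threshold membership inference baseline.
# The attacker observes class-probability vectors ("posteriors") that the
# target model outputs for nodes, and uses shadow data (nodes whose
# membership the attacker controls) to calibrate a decision threshold.

def max_confidence(posterior):
    """Highest class probability the model assigns to a node."""
    return max(posterior)

def calibrate_threshold(shadow_members, shadow_nonmembers):
    """Pick the confidence threshold that best separates shadow members
    from shadow non-members (members tend to get more confident outputs)."""
    candidates = sorted(max_confidence(p)
                        for p in shadow_members + shadow_nonmembers)
    best_t, best_acc = 0.5, 0.0
    total = len(shadow_members) + len(shadow_nonmembers)
    for t in candidates:
        correct = sum(max_confidence(p) >= t for p in shadow_members)
        correct += sum(max_confidence(p) < t for p in shadow_nonmembers)
        acc = correct / total
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def infer_membership(posterior, threshold):
    """Predict 'member' when the target model is confident enough."""
    return max_confidence(posterior) >= threshold

# Synthetic shadow posteriors: members over-confident, non-members not.
shadow_members = [[0.97, 0.02, 0.01], [0.92, 0.05, 0.03]]
shadow_nonmembers = [[0.55, 0.30, 0.15], [0.40, 0.35, 0.25]]
threshold = calibrate_threshold(shadow_members, shadow_nonmembers)
```

A full node-level attack as described in the abstract would replace the scalar confidence with a feature vector that also encodes the posteriors of a node's neighbors (padded or aggregated to a fixed length), and would train a learned attack classifier on outputs from several shadow models rather than a single threshold.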
Source Journal
CiteScore: 11.80
Self-citation rate: 2.80%
Articles per year: 114
Journal overview: The IEEE Transactions on Big Data publishes peer-reviewed articles focusing on big data. These articles present innovative research ideas and application results across disciplines, including novel theories, algorithms, and applications. Research areas cover a wide range, such as big data analytics, visualization, curation, management, semantics, infrastructure, standards, performance analysis, intelligence extraction, scientific discovery, security, privacy, and legal issues specific to big data. The journal also prioritizes applications of big data in fields generating massive datasets.