DPRO-GNN: Bridging differential privacy and advanced optimization for privacy-preserving graph learning

IF 6.8 · CAS Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS
Yanan Bai, Liji Xiao, Hongbo Zhao, Xiaoyu Shi
{"title":"DPRO-GNN:桥接差分隐私和保护隐私的高级优化图学习","authors":"Yanan Bai ,&nbsp;Liji Xiao ,&nbsp;Hongbo Zhao ,&nbsp;Xiaoyu Shi","doi":"10.1016/j.ins.2025.122695","DOIUrl":null,"url":null,"abstract":"<div><div>Graph Neural Networks (GNNs) have demonstrated exceptional performance in modeling structured data, yet their application in sensitive domains inevitably raises privacy concerns. Existing Differentially Private GNN (DPGNN) frameworks primarily rely on Differentially Private Stochastic Gradient Descent (DP-SGD) to enforce privacy guarantees. However, DP-SGD inherits its inherent limitations, such as training instability and slow convergence, which are particularly problematic for complex graph learning tasks. Although advanced optimizers like Ranger offer a promising alternative, their naive integration into DPGNN frameworks introduces bias, specifically in the second-moment estimation, due to the additive noise required for DP. To address this challenge, we propose the Differentially Private Ranger-Optimized Graph Neural Network (DPRO-GNN) to protect users’ sensitive data when training the GNN tasks. To mitigate DP noise and capture multi-scale structure, DPRO-GNN applies hierarchical pooling to aggregate nodes into progressively coarser subgraphs, yielding robust, multi-resolution embeddings. Meanwhile, our approach introduces DP-RangerBC, a bias-corrected variant of the Ranger optimizer that mitigates the noise-induced bias in second-order moment estimation, thereby enabling more stable and efficient training under DP constraints. Furthermore, the theoretical analysis of DPRO-GNN, including its correctness and security, is also provided. Extensive experiments on real-world datasets demonstrate that DPRO-GNN achieves superior performance in terms of classification accuracy and convergence speed, compared to state-of-the-art DPGNN methods. The code of DPRO-GNN is available at the following link:<span><span>https://github.com/Silbermondlel/DPRO-GNN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"723 ","pages":"Article 122695"},"PeriodicalIF":6.8000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DPRO-GNN: Bridging differential privacy and advanced optimization for privacy-preserving graph learning\",\"authors\":\"Yanan Bai ,&nbsp;Liji Xiao ,&nbsp;Hongbo Zhao ,&nbsp;Xiaoyu Shi\",\"doi\":\"10.1016/j.ins.2025.122695\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Graph Neural Networks (GNNs) have demonstrated exceptional performance in modeling structured data, yet their application in sensitive domains inevitably raises privacy concerns. Existing Differentially Private GNN (DPGNN) frameworks primarily rely on Differentially Private Stochastic Gradient Descent (DP-SGD) to enforce privacy guarantees. However, DP-SGD inherits its inherent limitations, such as training instability and slow convergence, which are particularly problematic for complex graph learning tasks. Although advanced optimizers like Ranger offer a promising alternative, their naive integration into DPGNN frameworks introduces bias, specifically in the second-moment estimation, due to the additive noise required for DP. To address this challenge, we propose the Differentially Private Ranger-Optimized Graph Neural Network (DPRO-GNN) to protect users’ sensitive data when training the GNN tasks. 
To mitigate DP noise and capture multi-scale structure, DPRO-GNN applies hierarchical pooling to aggregate nodes into progressively coarser subgraphs, yielding robust, multi-resolution embeddings. Meanwhile, our approach introduces DP-RangerBC, a bias-corrected variant of the Ranger optimizer that mitigates the noise-induced bias in second-order moment estimation, thereby enabling more stable and efficient training under DP constraints. Furthermore, the theoretical analysis of DPRO-GNN, including its correctness and security, is also provided. Extensive experiments on real-world datasets demonstrate that DPRO-GNN achieves superior performance in terms of classification accuracy and convergence speed, compared to state-of-the-art DPGNN methods. The code of DPRO-GNN is available at the following link:<span><span>https://github.com/Silbermondlel/DPRO-GNN</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":51063,\"journal\":{\"name\":\"Information Sciences\",\"volume\":\"723 \",\"pages\":\"Article 122695\"},\"PeriodicalIF\":6.8000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Sciences\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S002002552500828X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S002002552500828X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Graph Neural Networks (GNNs) have demonstrated exceptional performance in modeling structured data, yet their application in sensitive domains inevitably raises privacy concerns. Existing Differentially Private GNN (DPGNN) frameworks primarily rely on Differentially Private Stochastic Gradient Descent (DP-SGD) to enforce privacy guarantees. However, these frameworks inherit DP-SGD's intrinsic limitations, such as training instability and slow convergence, which are particularly problematic for complex graph learning tasks. Although advanced optimizers like Ranger offer a promising alternative, naively integrating them into DPGNN frameworks introduces bias, specifically in the second-moment estimation, due to the additive noise required for DP. To address this challenge, we propose the Differentially Private Ranger-Optimized Graph Neural Network (DPRO-GNN) to protect users' sensitive data during GNN training. To mitigate DP noise and capture multi-scale structure, DPRO-GNN applies hierarchical pooling to aggregate nodes into progressively coarser subgraphs, yielding robust, multi-resolution embeddings. Meanwhile, our approach introduces DP-RangerBC, a bias-corrected variant of the Ranger optimizer that mitigates the noise-induced bias in second-order moment estimation, thereby enabling more stable and efficient training under DP constraints. Furthermore, the theoretical analysis of DPRO-GNN, including its correctness and security, is also provided. Extensive experiments on real-world datasets demonstrate that DPRO-GNN achieves superior performance in terms of classification accuracy and convergence speed, compared to state-of-the-art DPGNN methods. The code of DPRO-GNN is available at the following link: https://github.com/Silbermondlel/DPRO-GNN.
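As a rough illustration of the hierarchical pooling idea mentioned in the abstract, the sketch below coarsens a toy graph in a DiffPool-style manner, where a soft cluster-assignment matrix S yields coarsened features X' = SᵀX and adjacency A' = SᵀAS. This is only a minimal sketch under assumptions, not the paper's implementation: the assignment matrix here is random rather than learned, and the function name `coarsen` is illustrative.

```python
# Minimal, illustrative sketch (not DPRO-GNN's actual pooling): one round of
# DiffPool-style coarsening with a hypothetical, randomly sampled assignment.
import numpy as np

rng = np.random.default_rng(0)

def coarsen(adjacency: np.ndarray, features: np.ndarray, num_clusters: int):
    """One pooling level: map nodes to clusters and return the coarser graph."""
    num_nodes = features.shape[0]
    logits = rng.normal(size=(num_nodes, num_clusters))                # stand-in for a learned assignment
    S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)     # row-wise softmax assignment
    coarse_features = S.T @ features                                   # (num_clusters, feat_dim)
    coarse_adjacency = S.T @ adjacency @ S                             # (num_clusters, num_clusters)
    return coarse_adjacency, coarse_features

# Toy usage: coarsen a random 10-node graph twice, 10 -> 4 -> 2 nodes.
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                                         # symmetric, no self-loops
X = rng.normal(size=(10, 8))
for k in (4, 2):
    A, X = coarsen(A, X, k)
    print(f"coarsened to {k} nodes: A {A.shape}, X {X.shape}")
```

Because each coarse node aggregates many original nodes, its embedding averages over per-node perturbations, which is the intuition behind using multi-resolution embeddings to dampen DP noise.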
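The second-moment bias that DP-RangerBC targets can also be seen in a few lines of NumPy. The sketch below is not the authors' optimizer; it only assumes standard DP-SGD-style clipping plus zero-mean Gaussian noise of known standard deviation, and shows that an Adam/Ranger-style moving average of squared noisy gradients converges to g² + σ² rather than g², together with one simple way to subtract the expected inflation. All names here (`dp_noisy_gradient`, `debias`) are hypothetical.

```python
# Minimal sketch (not the authors' DP-RangerBC) of how additive DP noise
# inflates a second-moment estimate, and one simple de-biasing step.
import numpy as np

rng = np.random.default_rng(0)

def dp_noisy_gradient(grad, clip_norm=10.0, noise_std=1.0):
    """Clip the gradient to `clip_norm` and add zero-mean Gaussian noise (DP-SGD style)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad * scale + rng.normal(0.0, noise_std, size=grad.shape)

def ema_second_moment(v, noisy_grad, beta2=0.999):
    """Adam/Ranger-style exponential moving average of squared gradients."""
    return beta2 * v + (1.0 - beta2) * noisy_grad ** 2

def debias(v, noise_std=1.0, eps=1e-8):
    """E[(g + n)^2] = g^2 + noise_std^2 for zero-mean noise n, so the EMA
    over-estimates the true second moment by about noise_std^2; subtract it,
    flooring at eps so an optimizer's denominator stays positive."""
    return np.maximum(v - noise_std ** 2, eps)

# Toy check: with a fixed gradient, the raw EMA drifts toward g^2 + noise_std^2 = 1.25.
g = np.full(4, 0.5)          # true per-coordinate gradient, g^2 = 0.25
v = np.zeros(4)
for _ in range(5000):
    v = ema_second_moment(v, dp_noisy_gradient(g))
print("raw EMA:", v.mean(), "de-biased:", debias(v).mean(), "target g^2:", 0.25)
```

Without such a correction, the inflated second moment shrinks the effective step size and destabilizes adaptive updates, which is the failure mode the abstract attributes to naive Ranger-plus-DP integration.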
Source journal: Information Sciences
Category: Engineering & Technology - Computer Science: Information Systems
CiteScore: 14.00
Self-citation rate: 17.30%
Articles published: 1322
Review time: 10.4 months
About the journal: Informatics and Computer Science Intelligent Systems Applications is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions. Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.