GuardianAI: Privacy-preserving federated anomaly detection with differential privacy

IF 2.3 Q2 COMPUTER SCIENCE, THEORY & METHODS
Array Pub Date : 2025-03-05 DOI:10.1016/j.array.2025.100381
Abdulatif Alabdulatif
{"title":"基于差分隐私保护的联邦异常检测","authors":"Abdulatif Alabdulatif","doi":"10.1016/j.array.2025.100381","DOIUrl":null,"url":null,"abstract":"<div><div>In the rapidly evolving landscape of cybersecurity, privacy-preserving anomaly detection has become crucial, particularly with the rise of sophisticated privacy attacks in distributed learning systems. Traditional centralized anomaly detection systems face challenges related to data privacy and scalability, making federated learning a promising alternative. However, federated learning models remain vulnerable to several privacy attacks, such as inference attacks, model inversion, and gradient leakage. To address these threats, this paper presents GuardianAI, a novel federated anomaly detection framework that incorporates advanced differential privacy techniques, including Gaussian noise addition and secure aggregation protocols, specifically designed to mitigate these attacks. GuardianAI aims to enhance privacy while maintaining high detection accuracy across distributed nodes. The framework effectively prevents attackers from extracting sensitive data from model updates by introducing noise to the gradients and securely aggregating updates across nodes. Experimental results show that GuardianAI achieves a testing accuracy of 99.8 %, outperforming other models like Logistic Regression, SVM, and Random Forest, while robustly defending against common privacy threats. These results demonstrate the practical potential of GuardianAI for secure deployment in various network environments, ensuring privacy without compromising performance.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"26 ","pages":"Article 100381"},"PeriodicalIF":2.3000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GuardianAI: Privacy-preserving federated anomaly detection with differential privacy\",\"authors\":\"Abdulatif Alabdulatif\",\"doi\":\"10.1016/j.array.2025.100381\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In the rapidly evolving landscape of cybersecurity, privacy-preserving anomaly detection has become crucial, particularly with the rise of sophisticated privacy attacks in distributed learning systems. Traditional centralized anomaly detection systems face challenges related to data privacy and scalability, making federated learning a promising alternative. However, federated learning models remain vulnerable to several privacy attacks, such as inference attacks, model inversion, and gradient leakage. To address these threats, this paper presents GuardianAI, a novel federated anomaly detection framework that incorporates advanced differential privacy techniques, including Gaussian noise addition and secure aggregation protocols, specifically designed to mitigate these attacks. GuardianAI aims to enhance privacy while maintaining high detection accuracy across distributed nodes. The framework effectively prevents attackers from extracting sensitive data from model updates by introducing noise to the gradients and securely aggregating updates across nodes. Experimental results show that GuardianAI achieves a testing accuracy of 99.8 %, outperforming other models like Logistic Regression, SVM, and Random Forest, while robustly defending against common privacy threats. 
These results demonstrate the practical potential of GuardianAI for secure deployment in various network environments, ensuring privacy without compromising performance.</div></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":\"26 \",\"pages\":\"Article 100381\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-03-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2590005625000086\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005625000086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

In the rapidly evolving landscape of cybersecurity, privacy-preserving anomaly detection has become crucial, particularly with the rise of sophisticated privacy attacks in distributed learning systems. Traditional centralized anomaly detection systems face challenges related to data privacy and scalability, making federated learning a promising alternative. However, federated learning models remain vulnerable to several privacy attacks, such as inference attacks, model inversion, and gradient leakage. To address these threats, this paper presents GuardianAI, a novel federated anomaly detection framework that incorporates advanced differential privacy techniques, including Gaussian noise addition and secure aggregation protocols, specifically designed to mitigate these attacks. GuardianAI aims to enhance privacy while maintaining high detection accuracy across distributed nodes. The framework effectively prevents attackers from extracting sensitive data from model updates by introducing noise to the gradients and securely aggregating updates across nodes. Experimental results show that GuardianAI achieves a testing accuracy of 99.8 %, outperforming other models like Logistic Regression, SVM, and Random Forest, while robustly defending against common privacy threats. These results demonstrate the practical potential of GuardianAI for secure deployment in various network environments, ensuring privacy without compromising performance.
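The mechanism the abstract describes, adding Gaussian noise to client gradients before aggregating updates across nodes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the clip norm, noise multiplier, and plain averaging are assumed placeholder values, and the averaging function stands in for GuardianAI's actual secure aggregation protocol.

```python
# Minimal sketch of Gaussian-mechanism gradient privatization plus server-side
# aggregation, as the abstract outlines. Parameter values are illustrative
# assumptions, not values reported for GuardianAI.
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's gradient to bound its L2 sensitivity, then add
    Gaussian noise scaled to that sensitivity."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))  # enforce ||g|| <= clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

def aggregate(updates):
    """Server-side averaging of privatized client updates. A real deployment
    would use a cryptographic secure-aggregation protocol here so the server
    never observes any individual update."""
    return np.mean(updates, axis=0)

# Toy federated round: three clients each contribute a noised gradient.
client_grads = [np.random.randn(10) for _ in range(3)]
noised = [privatize_gradient(g) for g in client_grads]
global_update = aggregate(noised)
```

In practice the noise multiplier would be calibrated to a target (ε, δ) privacy budget via a privacy accountant, trading detection accuracy against the strength of the differential-privacy guarantee.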
Source journal
Array (Computer Science - General Computer Science)
CiteScore: 4.40
Self-citation rate: 0.00%
Articles per year: 93
Review time: 45 days