Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity

Hao Jian Huang, Bekzod Iskandarov, Mizanur Rahman, Hakan T. Otal, M. Abdullah Canbaz
{"title":"Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity","authors":"Hao Jian Huang, Bekzod Iskandarov, Mizanur Rahman, Hakan T. Otal, M. Abdullah Canbaz","doi":"arxiv-2409.09794","DOIUrl":null,"url":null,"abstract":"This paper presents the design and implementation of a Federated Learning\n(FL) testbed, focusing on its application in cybersecurity and evaluating its\nresilience against poisoning attacks. Federated Learning allows multiple\nclients to collaboratively train a global model while keeping their data\ndecentralized, addressing critical needs for data privacy and security,\nparticularly in sensitive fields like cybersecurity. Our testbed, built using\nthe Flower framework, facilitates experimentation with various FL frameworks,\nassessing their performance, scalability, and ease of integration. Through a\ncase study on federated intrusion detection systems, we demonstrate the\ntestbed's capabilities in detecting anomalies and securing critical\ninfrastructure without exposing sensitive network data. Comprehensive poisoning\ntests, targeting both model and data integrity, evaluate the system's\nrobustness under adversarial conditions. Our results show that while federated\nlearning enhances data privacy and distributed learning, it remains vulnerable\nto poisoning attacks, which must be mitigated to ensure its reliability in\nreal-world applications.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09794","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper presents the design and implementation of a Federated Learning (FL) testbed, focusing on its application in cybersecurity and evaluating its resilience against poisoning attacks. Federated Learning allows multiple clients to collaboratively train a global model while keeping their data decentralized, addressing critical needs for data privacy and security, particularly in sensitive fields like cybersecurity. Our testbed, built using the Flower framework, facilitates experimentation with various FL frameworks, assessing their performance, scalability, and ease of integration. Through a case study on federated intrusion detection systems, we demonstrate the testbed's capabilities in detecting anomalies and securing critical infrastructure without exposing sensitive network data. Comprehensive poisoning tests, targeting both model and data integrity, evaluate the system's robustness under adversarial conditions. Our results show that while federated learning enhances data privacy and distributed learning, it remains vulnerable to poisoning attacks, which must be mitigated to ensure its reliability in real-world applications.
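To make the threat model concrete, the sketch below simulates one of the scenarios the abstract describes: federated averaging (FedAvg) across several clients, one of which mounts a label-flipping data-poisoning attack. This is a minimal, hypothetical illustration in plain NumPy, not the paper's Flower-based testbed code; the toy logistic-regression model, client counts, and all function names are assumptions introduced here for illustration.

```python
# Hypothetical sketch (not the authors' testbed): FedAvg with one
# label-flipping client, using a toy logistic-regression model.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=200, poisoned=False):
    """Generate a toy binary-classification shard; optionally flip labels."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    if poisoned:
        y = 1.0 - y  # label-flipping data-poisoning attack on the local shard
    return X, y

def local_train(w, X, y, epochs=5, lr=0.1):
    """A few steps of local logistic-regression gradient descent from the global weights."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(updates, sizes):
    """FedAvg aggregation: average client models weighted by local dataset size."""
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

# Three benign clients and one poisoned client share a global model.
clients = [make_client_data() for _ in range(3)] + [make_client_data(poisoned=True)]
global_w = np.zeros(2)

for _ in range(10):  # federated rounds
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

# Evaluate the aggregated global model on clean held-out data.
X_test, y_test = make_client_data(n=1000)
acc = (((X_test @ global_w) > 0).astype(float) == y_test).mean()
print(f"Global model accuracy with one poisoned client: {acc:.2%}")
```

With an equal-sized shard per client, the single poisoned update is diluted by the benign majority; increasing the fraction of poisoned clients (or switching to direct model-update manipulation) degrades the global model, which is the kind of vulnerability the poisoning tests in the paper probe.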