Fed-LSAE: Thwarting poisoning attacks against federated cyber threat detection system via Autoencoder-based latent space inspection

Impact Factor 3.8 · CAS Zone 2 (Computer Science) · JCR Q2, Computer Science, Information Systems
Tran Duc Luong, Vuong Minh Tien, Nguyen Huu Quyen, Do Thi Thu Hien, Phan The Duy, Van-Hau Pham
DOI: 10.1016/j.jisa.2024.103916
Journal of Information Security and Applications, Volume 87, Article 103916
Published: 2024-11-16
Citations: 0

Abstract

The rise of security concerns in conventional centralized learning has driven the adoption of federated learning. However, the risks posed by poisoning attacks from internal adversaries against federated systems necessitate robust anti-poisoning frameworks. While previous defensive mechanisms relied on outlier detection, recent approaches focus on latent space representation. In this paper, we investigate a novel robust aggregation method for federated learning, namely Fed-LSAE, which leverages latent space representation via the penultimate layer and an Autoencoder to exclude malicious clients from the training process. Specifically, in every training round Fed-LSAE measures the similarity of each local latent space vector to the global one using the Centered Kernel Alignment (CKA) algorithm. The resulting scores are categorized into benign and attack groups, and only the benign cluster is sent to the central server for federated averaging aggregation. In other words, adversaries are detected and eliminated from the federated training procedure. Experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks for developing a robust federated threat detector in the Internet of Things (IoT) context. With the Fed-LSAE defense integrated, the federated approach reaches approximately 98% across all evaluation metrics.
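The filtering step described in the abstract (compare each client's penultimate-layer representations to the global one with CKA, then keep only the well-aligned clients for FedAvg) can be sketched as follows. This is a minimal illustration using linear CKA; the function names and the fixed `threshold` cutoff are assumptions for the example only, since the paper clusters the similarity scores into benign and attack groups rather than thresholding them.

```python
import numpy as np

def linear_cka(X, Y):
    # Linear Centered Kernel Alignment between two representation
    # matrices of shape (n_samples, n_features); 1.0 = identical
    # up to rotation/scaling, values near 0 = unrelated.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

def filter_clients(global_latent, client_latents, threshold=0.5):
    # Score each client's latent matrix against the global latent
    # matrix and keep only well-aligned ("benign") client indices.
    scores = [linear_cka(global_latent, Z) for Z in client_latents]
    benign = [i for i, s in enumerate(scores) if s >= threshold]
    return benign, scores
```

In use, only the updates from the indices in `benign` would be passed to the server's FedAvg step, so a poisoned client whose latent space drifts away from the global representation simply drops out of that round's aggregation.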
Source Journal
Journal of Information Security and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 10.90
Self-citation rate: 5.40%
Articles per year: 206
Review time: 56 days
Journal Description: Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view on modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.