M3D-FL: Multi-layer Malicious Model Detection for Federated Learning in IoT Networks

IF 4.8 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Information Systems)
Okba Ben Atia , Mustafa Al Samara , Ismail Bennis , Abdelhafid Abouaissa , Jaafar Gaber , Pascal Lorenz
DOI: 10.1016/j.cose.2025.104444
Journal: Computers & Security, Volume 154, Article 104444
Published: 2025-03-25 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0167404825001336
Citations: 0

Abstract

Federated learning (FL) is an advanced machine-learning technique that preserves privacy while enabling multiple devices or clients to jointly train a model. Instead of sharing its private data, each device trains a local model on its own data and transmits only the model updates to a central server. However, FL systems face security threats such as poisoning attacks. Maliciously generated updates can have serious consequences for the global model; they can also be used to steal sensitive data or cause the model to make incorrect predictions. In this paper, we propose a new approach to improve the detection of malicious clients mounting such attacks. Our approach, M3D-FL (Multi-layer Malicious Model Detection for Federated Learning in IoT networks), works in two layers. The first layer computes a malicious score for each participating FL client using the LOF algorithm, enabling their rejection from the FL aggregation process. The second layer targets repeatedly rejected clients and employs MAD outlier detection to permanently eliminate them from the FL process. Simulation results on the CIFAR-10, MNIST, and Fashion-MNIST datasets show that M3D-FL outperforms other approaches studied in the literature on several performance metrics, including Accuracy Rate (ACC), Detection Rate (DR), Attack Success Rate (ASR), precision, and CPU aggregation run-time. M3D-FL is thus demonstrated to be a more effective and stricter method for detecting malicious models in FL.
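The two detection layers described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the flattening of client updates into vectors, the choice of `n_neighbors`, the per-client rejection-count statistic, and the 3.5 modified-z-score cutoff are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor  # LOF, as named in the abstract

def layer1_lof_flags(client_updates, n_neighbors=5):
    """Layer 1 (sketch): flag anomalous client model updates with LOF.

    client_updates: list of per-client parameter arrays for one FL round.
    Returns (flags, scores): flags[i] is True when client i is rejected
    from this round's aggregation; scores[i] grows with anomalousness.
    """
    X = np.stack([np.ravel(u) for u in client_updates])
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    labels = lof.fit_predict(X)              # -1 = outlier, +1 = inlier
    scores = -lof.negative_outlier_factor_   # ~1 for inliers, larger for outliers
    return labels == -1, scores

def layer2_mad_ban(rejection_counts, threshold=3.5):
    """Layer 2 (sketch): permanently ban clients whose accumulated
    rejection count is a MAD outlier (modified z-score above threshold)."""
    counts = np.asarray(rejection_counts, dtype=float)
    med = np.median(counts)
    mad = np.median(np.abs(counts - med))
    if mad == 0.0:                           # all counts identical: no outliers
        return np.zeros(counts.shape, dtype=bool)
    modified_z = 0.6745 * (counts - med) / mad
    return modified_z > threshold
```

In this sketch, a poisoned update far from the benign cluster receives a large LOF score and is excluded from the round's aggregation, while a client that keeps getting rejected accumulates a rejection count that stands out under MAD and is removed from the FL process for good.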
Source journal: Computers & Security (Engineering & Technology – Computer Science: Information Systems)
CiteScore: 12.40
Self-citation rate: 7.10%
Articles per year: 365
Review time: 10.7 months
Journal description: Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world. Computers & Security provides a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control, and data integrity in all sectors: industry, commerce, and academia. Recognized worldwide as the primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.