MUD-PQFed: Towards Malicious User Detection on Model Corruption in Privacy-Preserving Quantized Federated Learning

Hua Ma, Qun Li, Yifeng Zheng, Zhi Zhang, Xiaoning Liu, Yansong Gao, Said F. Al-Sarawi, Derek Abbott

Computers & Security, Volume 133, Article 103406, October 2023. DOI: 10.1016/j.cose.2023.103406
Citations: 0
Abstract
The use of cryptographic privacy-preserving techniques in Federated Learning (FL) inadvertently induces a security dilemma: tampered local model parameters are encrypted and thus cannot be audited. This work first demonstrates the triviality of mounting model corruption attacks against privacy-preserving FL. We consider the scenario where model updates are quantized to reduce communication overhead, so an adversary can corrupt the model simply by submitting local parameters outside the legitimate range. We then propose MUD-PQFed, a protocol that can precisely detect such malicious attacks and enforce fair penalties on malicious clients. By removing the contributions of the detected malicious clients, the global model's utility is preserved relative to the baseline global model trained in the absence of the corruption attack. Extensive experiments on the MNIST, CIFAR-10, and CelebA benchmark datasets validate its efficacy in retaining baseline accuracy and its effectiveness in detecting malicious clients at a fine-grained level.
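The attack surface the abstract describes can be illustrated with a toy sketch: under b-bit quantization, legitimate local updates lie in a known integer range, and a single out-of-range submission can corrupt a plain average. The range check below is what a server could do on plaintext updates; the paper's point is that encrypted aggregation prevents exactly this kind of auditing. The bit-width, client names, and detection rule here are illustrative assumptions, not the MUD-PQFed protocol itself.

```python
# Toy sketch of the model-corruption attack on quantized FL updates.
# Assumption: b-bit quantization, so legitimate values lie in [0, 2**b - 1].

B = 8                                # assumed quantization bit-width
LEGIT_MIN, LEGIT_MAX = 0, 2**B - 1   # legitimate range for quantized values

def is_malicious(update):
    """Flag a client whose quantized update leaves the legitimate range."""
    return any(v < LEGIT_MIN or v > LEGIT_MAX for v in update)

def aggregate(updates):
    """Average only the updates that pass the plaintext range check."""
    honest = [u for u in updates.values() if not is_malicious(u)]
    n = len(honest)
    return [sum(vs) / n for vs in zip(*honest)]

updates = {
    "client_a": [10, 200, 55],     # honest
    "client_b": [12, 198, 60],     # honest
    "client_c": [10_000, -5, 55],  # out-of-range: would skew a naive average
}
flagged = [c for c, u in updates.items() if is_malicious(u)]
print(flagged)             # ['client_c']
print(aggregate(updates))  # [11.0, 199.0, 57.5]
```

With encrypted updates, the server sees only ciphertexts and cannot apply `is_malicious` directly, which is why detecting such clients in privacy-preserving FL requires a dedicated protocol.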
Journal Introduction:
Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world.
Computers & Security provides a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control, and data integrity in all sectors: industry, commerce, and academia. Recognized worldwide as the primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.