Chong Zhang , Xiaojun Zhang , Xingchun Yang , Bingyun Liu , Yuan Zhang , Rang Zhou
Journal: Information Fusion, Volume 121, Article 103131
DOI: 10.1016/j.inffus.2025.103131
Published: 2025-03-27
Poisoning attacks resilient privacy-preserving federated learning scheme based on lightweight homomorphic encryption
Federated learning enables multiple participants to train a shared model without exposing their local raw data. Each participant uploads a local gradient model instead of the original data; however, the uploaded gradients may still contain sensitive information that an adversary can exploit to break privacy protection. Meanwhile, some adversaries can drive the training results away from the expected ones by tampering with the uploaded local gradient model or mixing malicious data into the local dataset, thereby inducing the model to produce wrong results on specific data. To this end, we devise a privacy-preserving federated learning scheme based on lightweight homomorphic encryption, which simultaneously reduces the weight of malicious data in gradient aggregation and supports anomaly detection of data, thereby resisting poisoning attacks. Theoretical analysis and experimental simulation show that the proposed scheme has lightweight computation advantages compared with existing federated learning schemes.
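The abstract does not specify the paper's actual construction, but the core idea — servers aggregating encrypted gradients while down-weighting suspected-malicious contributions — can be sketched with a toy additively homomorphic cryptosystem. The Paillier-style code below is an illustration under stated assumptions (tiny hardcoded demo primes, integer-quantized gradients, hypothetical anomaly flags), not the scheme proposed in the paper, and is in no way secure:

```python
# A minimal, insecure sketch of additively homomorphic gradient aggregation
# (Paillier-style; tiny hardcoded primes for illustration only).
import math
import random

def keygen(p=293, q=433):
    """Demo key pair. Real deployments use ~1024-bit primes."""
    n = p * q
    g = n + 1                               # standard simple generator choice
    lam = math.lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^{-1} mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

pk, sk = keygen()
n2 = pk[0] ** 2

# Three clients upload encrypted (integer-quantized) gradients. Multiplying
# ciphertexts adds the plaintexts, and raising a ciphertext to an integer
# weight scales its contribution -- so a gradient flagged as anomalous can be
# down-weighted (here to 0) without the server ever decrypting it.
grads = [5, 7, 9]
weights = [1, 1, 0]                          # third client flagged malicious
agg_ct = 1
for g_i, w_i in zip(grads, weights):
    agg_ct = agg_ct * pow(encrypt(pk, g_i), w_i, n2) % n2

assert decrypt(pk, sk, agg_ct) == 12         # weighted sum: 1*5 + 1*7 + 0*9
```

The server only ever sees ciphertexts and the (integer) weights produced by its anomaly detector; individual gradients stay hidden, which is the combination of privacy preservation and poisoning resistance the abstract describes.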
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.