Authors: Zhe Li; Honglong Chen; Zhichen Ni; Yudong Gao; Wei Lou
DOI: 10.1109/TMC.2024.3443862
IEEE Transactions on Mobile Computing, vol. 23, no. 12, pp. 14471-14483 (published 2024-08-15)
Towards Adaptive Privacy Protection for Interpretable Federated Learning
Federated learning (FL) is an effective privacy-preserving mechanism that collaboratively trains a global model in a distributed manner by sharing only model parameters, rather than data, from local clients such as mobile devices to a central server. Nevertheless, recent studies have shown that FL still suffers from gradient leakage, as adversaries can attempt to recover training data by analyzing the parameters shared by local clients. To address this issue, differential privacy (DP) is adopted to add noise to the parameters of local models before aggregation on the server. This, however, degrades gradient-based interpretability, since some of the important weights that capture the salient regions in feature maps are perturbed. To overcome this problem, we propose a simple yet effective adaptive gradient protection (AGP) mechanism that selectively adds noise perturbations to those channels of each client model that have a relatively small impact on interpretability. We also offer a theoretical analysis of the convergence of FL under our method. Evaluation results on both IID and Non-IID data demonstrate that the proposed AGP achieves a good trade-off between privacy protection and interpretability in FL. Furthermore, we verify the robustness of the proposed method against two different gradient leakage attacks.
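The core idea of the abstract — rank each layer's channels by how much they matter for interpretability, then add DP-style noise only to the least important ones — can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the use of mean absolute gradient as the importance proxy, and all parameter values are assumptions made for illustration.

```python
import numpy as np

def adaptive_gradient_protection(grads, protect_ratio=0.5, sigma=0.1, seed=0):
    """Hypothetical sketch of an AGP-style mechanism.

    grads: array of shape (C, H, W) -- per-channel gradients of one conv layer.
    protect_ratio: fraction of channels (the least important) to perturb.
    sigma: standard deviation of the Gaussian (DP-style) noise.
    """
    rng = np.random.default_rng(seed)
    # Proxy for interpretability importance: mean absolute gradient per channel.
    # Channels with large gradients tend to drive the salient regions that
    # gradient-based interpretability methods highlight.
    importance = np.abs(grads).mean(axis=(1, 2))
    n_noisy = int(len(importance) * protect_ratio)
    # Indices of the least-important channels: these receive the noise,
    # so the channels that matter for saliency maps stay intact.
    noisy_idx = np.argsort(importance)[:n_noisy]
    perturbed = grads.copy()
    perturbed[noisy_idx] += rng.normal(0.0, sigma, size=perturbed[noisy_idx].shape)
    return perturbed, noisy_idx
```

Before aggregation, each client would apply such a step to its local gradients, trading a small amount of privacy budget on the important channels for preserved interpretability; the actual AGP selection criterion and noise calibration are specified in the paper itself.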
Journal overview:
IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.