{"title":"Amplitude-Aligned Personalization and Robust Aggregation for Federated Learning","authors":"Yongqi Jiang;Siguang Chen;Xiangwen Bao","doi":"10.1109/TSUSC.2023.3341836","DOIUrl":null,"url":null,"abstract":"In practical applications, federated learning (FL) suffers from slow convergence rate and inferior performance resulting from the statistical heterogeneity of distributed data. Personalized FL (pFL) has been proposed to overcome this problem. However, existing pFL approaches mainly focus on measuring differences between entire model dimensions across clients, ignore the layer-wise differences in convolutional neural networks (CNNs), which may lead to inaccurate personalization. Additionally, two potential threats in FL are that malicious clients may attempt to poison the entire federation by tampering with local labels, and the model information uploaded by clients makes them vulnerable to inference attacks. To tackle these issues, 1) we propose a novel pFL approach in which clients minimize local classification errors and align the local and global prototypes for data from the class that is shared with other clients. This method adopts layer-wise collaborative training to achieve more granular personalization and converts local prototypes to the frequency domain to prevent source data leakage; 2) To prevent the FL model from misclassifying certain test samples as expected by poisoners, we design a robust aggregation method to ensure that benign clients who provide trustworthy model predictions for its local data are weighted far more heavily in the aggregation process than malicious clients. Experiments show that our scheme, especially in the data heterogeneity situation, can produce robust performance and more stable convergence while preserving privacy.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 3","pages":"535-547"},"PeriodicalIF":3.0000,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Sustainable Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10355048/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Abstract
In practical applications, federated learning (FL) suffers from slow convergence and inferior performance caused by the statistical heterogeneity of distributed data. Personalized FL (pFL) has been proposed to overcome this problem. However, existing pFL approaches mainly measure differences between entire models across clients and ignore the layer-wise differences within convolutional neural networks (CNNs), which may lead to inaccurate personalization. Additionally, FL faces two potential threats: malicious clients may attempt to poison the entire federation by tampering with local labels, and the model information clients upload leaves them vulnerable to inference attacks. To tackle these issues, 1) we propose a novel pFL approach in which clients minimize local classification errors and align the local and global prototypes of each class that is shared with other clients. The method adopts layer-wise collaborative training to achieve finer-grained personalization and converts local prototypes to the frequency domain to prevent source-data leakage; 2) to prevent the FL model from misclassifying certain test samples in the way poisoners intend, we design a robust aggregation method that weights benign clients, whose models give trustworthy predictions on their own local data, far more heavily in the aggregation process than malicious clients. Experiments show that our scheme produces robust performance and more stable convergence while preserving privacy, especially under data heterogeneity.
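The abstract does not give the paper's exact formulations, but the two mechanisms it describes can be sketched concretely. First, the frequency-domain prototype idea: a minimal sketch, assuming a class prototype is the mean of that class's feature vectors, a 1-D FFT as the transform, and an L2 alignment loss; the function names and the choice of transform are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_amplitude_prototype(class_features: np.ndarray) -> np.ndarray:
    """Illustrative sketch: average one class's feature vectors into a
    prototype, then keep only its amplitude spectrum. Discarding the
    phase means the raw feature values cannot be reconstructed from
    what the client uploads (the privacy motivation in the abstract)."""
    prototype = class_features.mean(axis=0)   # shape: (feature_dim,)
    spectrum = np.fft.fft(prototype)          # move to the frequency domain
    return np.abs(spectrum)                   # amplitude only; phase stays local

def prototype_alignment_loss(local_amp: np.ndarray,
                             global_amp: np.ndarray) -> float:
    """Assumed L2 distance between the local and global amplitude
    prototypes of a class shared with other clients."""
    return float(np.mean((local_amp - global_amp) ** 2))
```

Second, the robust aggregation: a minimal sketch, assuming each client carries a scalar trust score (e.g., how reliably its uploaded model predicts its own local data) and that the server takes a softmax-weighted average of flattened parameters; the temperature and weighting scheme are assumptions chosen to make trusted clients dominate, as the abstract requires.

```python
import numpy as np

def robust_aggregate(client_params: list[np.ndarray],
                     trust_scores: list[float],
                     temperature: float = 0.1) -> np.ndarray:
    """Softmax-weighted averaging: benign clients with high trust
    scores receive far larger weights than suspected label-tampering
    clients, which receive weights close to zero."""
    scores = np.asarray(trust_scores, dtype=float)
    weights = np.exp((scores - scores.max()) / temperature)  # numerically stable
    weights /= weights.sum()
    stacked = np.stack(client_params)          # (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)
```

With a low temperature, a small gap in trust scores already pushes most of the aggregation weight onto the benign clients, which is one simple way to realize the "weighted far more heavily" behavior the abstract describes.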