Amplitude-Aligned Personalization and Robust Aggregation for Federated Learning

IF 3.0 | CAS Zone 3 (Computer Science) | JCR Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
Yongqi Jiang, Siguang Chen, Xiangwen Bao
{"title":"针对联合学习的振幅对齐个性化和稳健聚合","authors":"Yongqi Jiang;Siguang Chen;Xiangwen Bao","doi":"10.1109/TSUSC.2023.3341836","DOIUrl":null,"url":null,"abstract":"In practical applications, federated learning (FL) suffers from slow convergence rate and inferior performance resulting from the statistical heterogeneity of distributed data. Personalized FL (pFL) has been proposed to overcome this problem. However, existing pFL approaches mainly focus on measuring differences between entire model dimensions across clients, ignore the layer-wise differences in convolutional neural networks (CNNs), which may lead to inaccurate personalization. Additionally, two potential threats in FL are that malicious clients may attempt to poison the entire federation by tampering with local labels, and the model information uploaded by clients makes them vulnerable to inference attacks. To tackle these issues, 1) we propose a novel pFL approach in which clients minimize local classification errors and align the local and global prototypes for data from the class that is shared with other clients. This method adopts layer-wise collaborative training to achieve more granular personalization and converts local prototypes to the frequency domain to prevent source data leakage; 2) To prevent the FL model from misclassifying certain test samples as expected by poisoners, we design a robust aggregation method to ensure that benign clients who provide trustworthy model predictions for its local data are weighted far more heavily in the aggregation process than malicious clients. Experiments show that our scheme, especially in the data heterogeneity situation, can produce robust performance and more stable convergence while preserving privacy.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 3","pages":"535-547"},"PeriodicalIF":3.0000,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Amplitude-Aligned Personalization and Robust Aggregation for Federated Learning\",\"authors\":\"Yongqi Jiang;Siguang Chen;Xiangwen Bao\",\"doi\":\"10.1109/TSUSC.2023.3341836\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In practical applications, federated learning (FL) suffers from slow convergence rate and inferior performance resulting from the statistical heterogeneity of distributed data. Personalized FL (pFL) has been proposed to overcome this problem. However, existing pFL approaches mainly focus on measuring differences between entire model dimensions across clients, ignore the layer-wise differences in convolutional neural networks (CNNs), which may lead to inaccurate personalization. Additionally, two potential threats in FL are that malicious clients may attempt to poison the entire federation by tampering with local labels, and the model information uploaded by clients makes them vulnerable to inference attacks. To tackle these issues, 1) we propose a novel pFL approach in which clients minimize local classification errors and align the local and global prototypes for data from the class that is shared with other clients. 
This method adopts layer-wise collaborative training to achieve more granular personalization and converts local prototypes to the frequency domain to prevent source data leakage; 2) To prevent the FL model from misclassifying certain test samples as expected by poisoners, we design a robust aggregation method to ensure that benign clients who provide trustworthy model predictions for its local data are weighted far more heavily in the aggregation process than malicious clients. Experiments show that our scheme, especially in the data heterogeneity situation, can produce robust performance and more stable convergence while preserving privacy.\",\"PeriodicalId\":13268,\"journal\":{\"name\":\"IEEE Transactions on Sustainable Computing\",\"volume\":\"9 3\",\"pages\":\"535-547\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-12-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Sustainable Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10355048/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Sustainable Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10355048/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

In practical applications, federated learning (FL) suffers from a slow convergence rate and inferior performance caused by the statistical heterogeneity of distributed data. Personalized FL (pFL) has been proposed to overcome this problem. However, existing pFL approaches mainly focus on measuring differences between entire models across clients, ignoring the layer-wise differences in convolutional neural networks (CNNs), which may lead to inaccurate personalization. Additionally, two potential threats in FL are that malicious clients may attempt to poison the entire federation by tampering with local labels, and that the model information uploaded by clients leaves them vulnerable to inference attacks. To tackle these issues, 1) we propose a novel pFL approach in which clients minimize local classification errors and align the local and global prototypes for data from classes shared with other clients; this method adopts layer-wise collaborative training to achieve more granular personalization and converts local prototypes to the frequency domain to prevent source data leakage; and 2) to prevent the FL model from misclassifying certain test samples as intended by poisoners, we design a robust aggregation method that ensures benign clients who provide trustworthy model predictions for their local data are weighted far more heavily in the aggregation process than malicious clients. Experiments show that our scheme produces robust performance and more stable convergence while preserving privacy, especially under data heterogeneity.
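
The frequency-domain prototype conversion can be illustrated with a short sketch. The abstract does not specify the exact transform or loss, so the following is a minimal, assumption-laden PyTorch sketch: a class prototype (mean feature vector) is mapped to its DFT amplitude spectrum, and the alignment loss penalizes the distance between local and global spectra. The function names and the squared-L2 loss are illustrative, not the paper's definitions.

```python
import torch

def amplitude_spectrum(prototype: torch.Tensor) -> torch.Tensor:
    # DFT of the prototype; keep only the magnitude, discarding phase.
    # Sharing amplitudes alone is one plausible reading of "convert
    # local prototypes to the frequency domain" to hide source data.
    return torch.fft.fft(prototype).abs()

def alignment_loss(local_proto: torch.Tensor,
                   global_proto: torch.Tensor) -> torch.Tensor:
    # Squared L2 distance between local and global amplitude spectra
    # for one class shared with other clients; minimized jointly with
    # the local classification loss.
    return (amplitude_spectrum(local_proto)
            - amplitude_spectrum(global_proto)).pow(2).sum()
```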
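Layer-wise collaborative training is described only at a high level. One common realization, sketched below under that assumption, computes a per-layer similarity between a client's parameters and each peer's, then normalizes the scores into per-layer aggregation weights rather than one weight per whole model. Cosine similarity is a stand-in for whatever layer-wise measure the paper actually uses.

```python
import torch
import torch.nn.functional as F
from typing import Dict, List

def layer_wise_weights(local_state: Dict[str, torch.Tensor],
                       peer_states: List[Dict[str, torch.Tensor]]
                       ) -> Dict[str, torch.Tensor]:
    """Per-layer aggregation weights: each layer gets its own weight
    vector over peers, enabling more granular personalization."""
    weights = {}
    for name, local_param in local_state.items():
        local_flat = local_param.flatten().float()
        # Hypothetical similarity measure; the abstract does not
        # define how layer-wise differences are scored.
        sims = torch.stack([
            F.cosine_similarity(local_flat,
                                peer[name].flatten().float(), dim=0)
            for peer in peer_states
        ])
        weights[name] = torch.softmax(sims, dim=0)
    return weights
```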
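The robust aggregation rule can be sketched in the same spirit. The trust score below (how trustworthy a client's predictions on its own local data are) is an assumed input; the paper derives its own measure. Given such scores, a sharp softmax makes benign clients dominate the weighted average of uploaded model states.

```python
import torch
from typing import Dict, List

def robust_aggregate(client_states: List[Dict[str, torch.Tensor]],
                     trust_scores: torch.Tensor,
                     temperature: float = 0.1) -> Dict[str, torch.Tensor]:
    """Trust-weighted aggregation: a low softmax temperature lets
    clients with trustworthy local predictions dominate the average,
    marginalizing suspected label poisoners."""
    weights = torch.softmax(trust_scores / temperature, dim=0)
    aggregated = {}
    for name in client_states[0]:
        stacked = torch.stack([s[name].float() for s in client_states])
        # Broadcast each client's weight over its parameter tensor.
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        aggregated[name] = (w * stacked).sum(dim=0)
    return aggregated
```

With temperature 0.1, a benign client scoring 0.9 against a poisoner scoring 0.5 receives roughly 55 times the weight, which matches the stated goal of weighting benign clients "far more heavily" than malicious ones.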
Source Journal
IEEE Transactions on Sustainable Computing (Mathematics: Control and Optimization)
CiteScore: 7.70 | Self-citation rate: 2.60% | Annual publications: 54