FedTC: A Personalized Federated Learning Method with Two Classifiers

Yang Liu, Jiabo Wang, Qinbo Liu, Mehdi Gheisari, Wanyin Xu, Zoe L. Jiang, Jiajia Zhang
*Computers, Materials & Continua*, 2023. DOI: [10.32604/cmc.2023.039452](https://doi.org/10.32604/cmc.2023.039452)

Abstract

Centralized training of deep learning models poses privacy risks that hinder their deployment. Federated learning (FL) has emerged as a solution to these risks: it allows multiple clients to train a deep learning model collaboratively without sharing raw data. However, FL is vulnerable to heterogeneous distributed data, which weakens convergence stability and leads to suboptimal performance of the trained model on local data. The reason is that the old local model is discarded at each round of training, losing personalized information that is critical for maintaining accuracy and ensuring robustness. In this paper, we propose FedTC, a personalized federated learning method with two classifiers that retains personalized information in the local model and improves the model's performance on local data. FedTC divides the model into two parts, the extractor and the classifier, where the classifier is the last layer of the model and the extractor consists of the remaining layers. The classifier of the local model is always retained so that personalized information is not lost. After the client receives the global model, the local extractor is overwritten by the global model's extractor, and the global model's classifier serves as an additional classifier of the local model to guide local training. FedTC introduces a two-classifier training strategy to coordinate the two classifiers during local model updates. Experimental results on the CIFAR-10 and CIFAR-100 datasets demonstrate that FedTC outperforms existing approaches such as FedAvg, FedPer, and purely local training on heterogeneous data, achieving a maximum improvement of 27.95% in classification test accuracy over FedAvg.
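The client-side merge described above can be sketched in a few lines. This is a minimal NumPy sketch, not the paper's implementation: the layer shapes, the `alpha` blending weight, and all function names are illustrative assumptions; only the split-and-merge logic (overwrite the extractor, retain the local classifier, keep the global classifier as an auxiliary head) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(d_in=8, d_hid=4, n_cls=3):
    # "extractor" = every layer but the last; "classifier" = the last layer.
    # A single hidden layer stands in for an arbitrary feature extractor.
    return {
        "extractor": rng.normal(size=(d_in, d_hid)),
        "classifier": rng.normal(size=(d_hid, n_cls)),
    }

def fedtc_receive(local, global_model):
    """Client-side merge in FedTC: the local extractor is overwritten by the
    global extractor, the local (personalized) classifier is retained, and the
    global classifier is kept as an auxiliary head to guide local training."""
    return {
        "extractor": global_model["extractor"].copy(),        # overwritten
        "classifier": local["classifier"],                    # retained
        "aux_classifier": global_model["classifier"].copy(),  # guidance head
    }

def forward(merged, x, alpha=0.5):
    # Hypothetical coordination of the two classifiers: blend the two heads'
    # logits; the actual training strategy in the paper may differ.
    h = np.tanh(x @ merged["extractor"])
    return alpha * (h @ merged["classifier"]) + (1 - alpha) * (h @ merged["aux_classifier"])
```

Keeping the personalized classifier while refreshing the shared extractor is what distinguishes this merge from plain FedAvg, where the entire local model would be overwritten each round.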