Federated Unlearning With Fast Recovery

Impact Factor: 9.2 · CAS Zone 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS)
Changjun Zhou, Chenglin Pan, Minglu Li, Pengfei Wang
{"title":"Federated Unlearning With Fast Recovery","authors":"Changjun Zhou;Chenglin Pan;Minglu Li;Pengfei Wang","doi":"10.1109/TMC.2025.3563265","DOIUrl":null,"url":null,"abstract":"Recent federated unlearning studies mainly focus on removing the target client's contributions from the global model permanently. However, the requirement for accommodating temporary user exits or additions in federated learning has been neglected. In this paper, we propose a novel recoverable federated unlearning scheme, named RFUL, which allows users to remove or add their local model to the global one at any time easily and quickly. It mainly consists of two main components, i.e., knowledge unlearning and knowledge recovery. In knowledge unlearning, the target contributions can be eliminated by training with mislabeled target data, while preserving the non-target contributions through distillation using the original model. In knowledge recovery, the forgotten contributions can be restored by training the target data using classification loss, while the non-target contributions are maintained through feature distillation and parameter freezing on the classifier. Both knowledge unlearning and recovery processes only require the participation of target data, guaranteeing the algorithm's practicality in federated learning systems. Extensive experiments demonstrate the significant efficacy of RFUL. For knowledge unlearning, RFUL matches state-of-the-art methods using only target data, achieving a runtime speedup of 3.3 to 8.7 times compared to retraining across various datasets. For knowledge recovery, RFUL exceeds state-of-the-art incremental learning methods by 5.02% to 29.97% in accuracy and achieves a runtime speedup of 1.8 to 4.4 times compared to retraining on different datasets.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"9709-9725"},"PeriodicalIF":9.2000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10972332/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Recent federated unlearning studies mainly focus on permanently removing the target client's contributions from the global model. However, the need to accommodate temporary user exits or additions in federated learning has been neglected. In this paper, we propose a novel recoverable federated unlearning scheme, named RFUL, which allows users to remove their local model from, or add it back to, the global one at any time, easily and quickly. It consists of two main components: knowledge unlearning and knowledge recovery. In knowledge unlearning, the target contributions are eliminated by training on mislabeled target data, while the non-target contributions are preserved through distillation from the original model. In knowledge recovery, the forgotten contributions are restored by training on the target data with a classification loss, while the non-target contributions are maintained through feature distillation and parameter freezing on the classifier. Both the unlearning and recovery processes require only the participation of the target data, guaranteeing the algorithm's practicality in federated learning systems. Extensive experiments demonstrate the significant efficacy of RFUL. For knowledge unlearning, RFUL matches state-of-the-art methods while using only target data, achieving a runtime speedup of 3.3 to 8.7 times over retraining across various datasets. For knowledge recovery, RFUL exceeds state-of-the-art incremental learning methods by 5.02% to 29.97% in accuracy and achieves a runtime speedup of 1.8 to 4.4 times over retraining on different datasets.
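The abstract describes two training objectives but gives no formulas. The following is a minimal PyTorch sketch of how the two described loss constructions could look, offered only as an illustration under stated assumptions, not as the paper's implementation: the function names, the distillation temperature T, the mixing weights alpha and beta, and the model.classifier attribute are all our own assumptions.

```python
import torch
import torch.nn.functional as F

def unlearning_loss(student_logits, teacher_logits, labels, num_classes,
                    T=2.0, alpha=0.5):
    """Knowledge unlearning (sketch): forget target data, keep the rest."""
    # Erase target contributions: train on randomly *mislabeled* target data.
    offsets = torch.randint(1, num_classes, labels.shape, device=labels.device)
    wrong_labels = (labels + offsets) % num_classes  # always != true label
    forget = F.cross_entropy(student_logits, wrong_labels)
    # Preserve non-target contributions: distill the frozen original model.
    preserve = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * forget + (1.0 - alpha) * preserve

def recovery_loss(logits, labels, student_feats, teacher_feats, beta=0.5):
    """Knowledge recovery (sketch): relearn target data, keep the rest."""
    # Restore the forgotten contributions with a plain classification loss.
    relearn = F.cross_entropy(logits, labels)
    # Maintain non-target contributions via feature-level distillation
    # against the unlearned model's penultimate features.
    keep = F.mse_loss(student_feats, teacher_feats)
    return relearn + beta * keep

def freeze_classifier(model):
    """Recovery keeps the classifier head frozen; only the feature
    extractor is updated (assumes the model exposes a .classifier module)."""
    for p in model.classifier.parameters():
        p.requires_grad = False
```

Note that both sketches consume only the target client's data: the teacher terms come from frozen copies of the original model (during unlearning) or the unlearned model (during recovery), which is consistent with the abstract's claim that no other clients need to participate.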
Source Journal
IEEE Transactions on Mobile Computing (Engineering & Technology - Telecommunications)
CiteScore: 12.90
Self-citation rate: 2.50%
Annual publications: 403
Average review time: 6.6 months
About the journal: IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing, including (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.