Federated Unlearning With Fast Recovery
Changjun Zhou; Chenglin Pan; Minglu Li; Pengfei Wang
IEEE Transactions on Mobile Computing, vol. 24, no. 10, pp. 9709-9725, published 2025-04-22
DOI: 10.1109/TMC.2025.3563265 (https://ieeexplore.ieee.org/document/10972332/)
Citations: 0
Abstract
Recent federated unlearning studies focus mainly on permanently removing a target client's contributions from the global model. However, the need to accommodate temporary user exits and re-entries in federated learning has been neglected. In this paper, we propose a novel recoverable federated unlearning scheme, named RFUL, which allows users to remove their local model from, or add it back to, the global model easily and quickly at any time. It consists of two main components: knowledge unlearning and knowledge recovery. In knowledge unlearning, the target contributions are eliminated by training with mislabeled target data, while the non-target contributions are preserved through distillation from the original model. In knowledge recovery, the forgotten contributions are restored by training on the target data with a classification loss, while the non-target contributions are maintained through feature distillation and parameter freezing on the classifier. Both the unlearning and recovery processes require only the participation of target data, ensuring the algorithm's practicality in federated learning systems. Extensive experiments demonstrate the significant efficacy of RFUL. For knowledge unlearning, RFUL matches state-of-the-art methods using only target data, achieving a runtime speedup of 3.3 to 8.7 times over retraining across various datasets. For knowledge recovery, RFUL exceeds state-of-the-art incremental learning methods by 5.02% to 29.97% in accuracy and achieves a runtime speedup of 1.8 to 4.4 times over retraining on different datasets.
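The knowledge-unlearning component described above combines two terms: a classification loss on deliberately mislabeled target data (to erase the target contributions) and a distillation loss against the frozen original model (to preserve non-target knowledge). The paper does not give its exact formulation here, so the following is only a minimal NumPy sketch of one plausible combined objective; the function name, the cross-entropy/KL choice, and the `alpha` weighting are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def unlearning_loss(student_logits, teacher_logits, mislabeled_targets, alpha=0.5):
    """Hypothetical combined unlearning objective (not from the paper):
    - cross-entropy on mislabeled target data pushes the model away from
      the contributions to be forgotten;
    - KL distillation against the frozen original ("teacher") model
      preserves non-target knowledge.
    `alpha` trades off forgetting against preservation.
    """
    p_s = softmax(student_logits)
    # Cross-entropy against the (deliberately wrong) labels.
    ce = -np.log(p_s[np.arange(len(mislabeled_targets)), mislabeled_targets]).mean()
    # KL(teacher || student) distillation term.
    p_t = softmax(teacher_logits)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return alpha * ce + (1.0 - alpha) * kl
```

In a federated setting, only the target client would run this objective locally; the distillation term requires no non-target data, which is consistent with the abstract's claim that both processes need only the target data.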
Journal Introduction:
IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.