Speed up Federated Unlearning With Temporary Local Models
Muhammad Ameen; Pengfei Wang; Weijian Su; Xiaopeng Wei; Qiang Zhang
IEEE Transactions on Sustainable Computing, vol. 10, no. 5, pp. 921-936, published 2025-03-07. DOI: 10.1109/TSUSC.2025.3549112
Abstract
Federated unlearning (FUL) addresses the problem of removing data contributions from trained federated learning (FL) models. Existing FUL methods focus only on iteratively unlearning individual clients’ contributions and cannot handle scenarios where multiple clients request to remove their data at the same time. Additionally, FUL still needs to address issues including convergence speed, maintaining the global model’s performance, and parallel unlearning to expedite the unlearning process. To fill this gap, we introduce Federated Clients Forgetting (FedCF), a fast and accurate FUL method that can eliminate a single client’s contribution (as existing methods do), eliminate multiple clients’ contributions from the global model in parallel, preserve the performance of the unlearned global model, and reduce the unlearning time. The key idea is to construct a temporary model by extracting knowledge from the remaining clients’ updates and adding it to the corresponding parameters of the initial global model, and then to leverage this temporary model to reconstruct the unlearned global model. In extensive experiments on three benchmark datasets, FedCF demonstrates its efficiency and effectiveness for unlearning a single client’s contribution, achieving average speedups of 8.3x, 6.5x, and 4.1x over the existing methods FedRetrain, FedEraser, and FUL with knowledge distillation, respectively. FedCF also maintains time efficiency and performance guarantees when unlearning the contributions of multiple clients in parallel.
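To make the two-step idea in the abstract concrete, the sketch below illustrates it in PyTorch-style Python: a temporary model is built by adding aggregated updates from the remaining clients onto the initial global model’s parameters, and that temporary model is then used as the starting point for reconstructing the unlearned global model. The function names, the plain averaging of updates, and the single reconstruction pass are assumptions for illustration only, not the authors’ exact FedCF procedure.

```python
# Illustrative sketch only; not the paper's exact algorithm.
import copy
import torch

def build_temporary_model(initial_global_model, remaining_updates):
    """Add the averaged parameter updates of the remaining (retained) clients
    onto the corresponding parameters of the initial global model.

    remaining_updates: list of dicts mapping parameter name -> update tensor
    (hypothetical format assumed for this sketch)."""
    temp = copy.deepcopy(initial_global_model)
    with torch.no_grad():
        for name, param in temp.named_parameters():
            stacked = torch.stack([upd[name] for upd in remaining_updates])
            param.add_(stacked.mean(dim=0))  # inject retained clients' knowledge
    return temp

def reconstruct_unlearned_model(temp_model, remaining_updates):
    """Start from the temporary model and apply a further pass of the retained
    clients' updates to obtain the unlearned global model (a single averaging
    pass stands in here for the paper's reconstruction step)."""
    unlearned = copy.deepcopy(temp_model)
    with torch.no_grad():
        for name, param in unlearned.named_parameters():
            stacked = torch.stack([upd[name] for upd in remaining_updates])
            param.add_(stacked.mean(dim=0))
    return unlearned
```

Because the reconstruction starts from a model that already encodes the remaining clients’ knowledge rather than from scratch, this kind of workflow avoids full retraining, which is consistent with the speedups over FedRetrain reported in the abstract.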