Rapid federated unlearning with tuning parameters based on Fisher information matrix

Authors: Fengda Zhao, Qianyi Xu, Hao Wang, Dingding Guo
Journal: Applied Intelligence, vol. 55, no. 15 (published 2025-09-25)
DOI: 10.1007/s10489-025-06593-0
URL: https://link.springer.com/article/10.1007/s10489-025-06593-0
Citations: 0
Abstract
Federated learning is a distributed machine learning approach widely applied in privacy-sensitive scenarios. With the emergence of the “right to be forgotten” and the pursuit of data accuracy, there is an increasing demand to quickly and accurately delete targeted information from models while ensuring model performance. Therefore, federated unlearning has been introduced. Although current federated unlearning methods achieve effective unlearning, they often involve lengthy processes and require servers to store extensive historical update information. We propose a novel rapid federated unlearning method named FedTune. This method leverages the Fisher information matrix computed on the client side to assess the correlation between model parameters and the target data, identifying key parameters for adjustment. Based on the importance of these parameters and the frequency of client participation, FedTune determines appropriate adjustment ratios to increase the classification loss on the target data, thereby reducing the model’s accuracy and achieving effective data unlearning. Finally, the server collaborates with the remaining clients for a few rounds of retraining to restore the overall classification performance rapidly. We evaluated the FedTune method on the MNIST, CIFAR-10, and PURCHASE datasets, considering both fixed and dynamic client selection scenarios in privacy-sensitive and contamination settings. Experimental results show that FedTune reduces the time consumed by the unlearning process and server storage costs of the unlearning algorithm while ensuring model classification accuracy and effective unlearning compared to other unlearning algorithms.
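The core mechanism the abstract describes, scoring each parameter's relevance to the target data with the (diagonal) Fisher information and then dampening the most relevant parameters to raise the loss on that data, can be sketched for a plain logistic-regression model. This is an illustrative assumption-laden sketch, not the paper's actual FedTune algorithm: the function names, the fixed `top_frac`/`scale` damping rule, and the empirical-Fisher approximation are all choices made here for clarity.

```python
import numpy as np

def fisher_diagonal(X, y, w):
    """Empirical diagonal of the Fisher information matrix for a
    logistic-regression model: the mean of squared per-sample
    gradients of the negative log-likelihood w.r.t. the weights."""
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
    per_sample_grad = (p - y)[:, None] * X  # per-sample gradient rows
    return np.mean(per_sample_grad ** 2, axis=0)

def select_and_perturb(w, fisher, top_frac=0.2, scale=0.5):
    """Shrink the parameters most informative about the target data,
    which raises the model's loss on that data (a stand-in for
    FedTune's importance-based adjustment ratios)."""
    k = max(1, int(top_frac * w.size))
    idx = np.argsort(fisher)[-k:]   # indices of most important params
    w_new = w.copy()
    w_new[idx] *= scale             # dampen them toward zero
    return w_new, idx

# Toy "target data" and a trained-looking weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)
w = true_w.copy()

fim = fisher_diagonal(X, y, w)
w_unlearned, touched = select_and_perturb(w, fim)
```

In the federated setting the abstract describes, `fisher_diagonal` would run on the client holding the target data, only the selected indices and adjustment ratios would be applied at the server, and a few rounds of retraining with the remaining clients would then restore accuracy on the retained data.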
Journal description:
With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.