Rapid federated unlearning with tuning parameters based on Fisher information matrix

IF 3.5 | CAS Zone 2 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence)
Fengda Zhao, Qianyi Xu, Hao Wang, Dingding Guo
{"title":"基于fisher信息矩阵的参数调优快速联合学习","authors":"Fengda Zhao,&nbsp;Qianyi Xu,&nbsp;Hao Wang,&nbsp;Dingding Guo","doi":"10.1007/s10489-025-06593-0","DOIUrl":null,"url":null,"abstract":"<div><p>Federated learning is a distributed machine learning approach widely applied in privacy-sensitive scenarios. With the emergence of the “right to be forgotten” and the pursuit of data accuracy, there is an increasing demand to quickly and accurately delete targeted information from models while ensuring model performance. Therefore, federated unlearning has been introduced. Although current federated unlearning methods achieve effective unlearning, they often involve lengthy processes and require servers to store extensive historical update information. We propose a novel rapid federated unlearning method named FedTune. This method leverages the Fisher information matrix computed on the client side to assess the correlation between model parameters and the target data, identifying key parameters for adjustment. Based on the importance of these parameters and the frequency of client participation, FedTune determines appropriate adjustment ratios to increase the classification loss on the target data, thereby reducing the model’s accuracy and achieving effective data unlearning. Finally, the server collaborates with the remaining clients for a few rounds of retraining to restore the overall classification performance rapidly. We evaluated the FedTune method on the MNIST, CIFAR-10, and PURCHASE datasets, considering both fixed and dynamic client selection scenarios in privacy-sensitive and contamination settings. Experimental results show that FedTune reduces the time consumed by the unlearning process and server storage costs of the unlearning algorithm while ensuring model classification accuracy and effective unlearning compared to other unlearning algorithms.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 15","pages":""},"PeriodicalIF":3.5000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Rapid federated unlearning with tuning parameters based on fisher information matrix\",\"authors\":\"Fengda Zhao,&nbsp;Qianyi Xu,&nbsp;Hao Wang,&nbsp;Dingding Guo\",\"doi\":\"10.1007/s10489-025-06593-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Federated learning is a distributed machine learning approach widely applied in privacy-sensitive scenarios. With the emergence of the “right to be forgotten” and the pursuit of data accuracy, there is an increasing demand to quickly and accurately delete targeted information from models while ensuring model performance. Therefore, federated unlearning has been introduced. Although current federated unlearning methods achieve effective unlearning, they often involve lengthy processes and require servers to store extensive historical update information. We propose a novel rapid federated unlearning method named FedTune. This method leverages the Fisher information matrix computed on the client side to assess the correlation between model parameters and the target data, identifying key parameters for adjustment. Based on the importance of these parameters and the frequency of client participation, FedTune determines appropriate adjustment ratios to increase the classification loss on the target data, thereby reducing the model’s accuracy and achieving effective data unlearning. 
Finally, the server collaborates with the remaining clients for a few rounds of retraining to restore the overall classification performance rapidly. We evaluated the FedTune method on the MNIST, CIFAR-10, and PURCHASE datasets, considering both fixed and dynamic client selection scenarios in privacy-sensitive and contamination settings. Experimental results show that FedTune reduces the time consumed by the unlearning process and server storage costs of the unlearning algorithm while ensuring model classification accuracy and effective unlearning compared to other unlearning algorithms.</p></div>\",\"PeriodicalId\":8041,\"journal\":{\"name\":\"Applied Intelligence\",\"volume\":\"55 15\",\"pages\":\"\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10489-025-06593-0\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-025-06593-0","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning is a distributed machine learning approach widely applied in privacy-sensitive scenarios. With the emergence of the “right to be forgotten” and the pursuit of data accuracy, there is an increasing demand to quickly and accurately delete targeted information from models while ensuring model performance. Therefore, federated unlearning has been introduced. Although current federated unlearning methods achieve effective unlearning, they often involve lengthy processes and require servers to store extensive historical update information. We propose a novel rapid federated unlearning method named FedTune. This method leverages the Fisher information matrix computed on the client side to assess the correlation between model parameters and the target data, identifying key parameters for adjustment. Based on the importance of these parameters and the frequency of client participation, FedTune determines appropriate adjustment ratios to increase the classification loss on the target data, thereby reducing the model’s accuracy and achieving effective data unlearning. Finally, the server collaborates with the remaining clients for a few rounds of retraining to restore the overall classification performance rapidly. We evaluated the FedTune method on the MNIST, CIFAR-10, and PURCHASE datasets, considering both fixed and dynamic client selection scenarios in privacy-sensitive and contamination settings. Experimental results show that FedTune reduces the time consumed by the unlearning process and server storage costs of the unlearning algorithm while ensuring model classification accuracy and effective unlearning compared to other unlearning algorithms.
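
The abstract outlines three steps: score parameters with a Fisher information matrix computed on the target client's data, adjust the most important parameters so that the classification loss on the target data rises, and run a few recovery rounds with the remaining clients. As an illustration only, the sketch below shows one plausible reading of the first two steps using a diagonal empirical Fisher, F_j ≈ E[(∂ log p(y|x; θ) / ∂θ_j)²]; the function names, the top-fraction mask, and the multiplicative adjust_ratio are assumptions of this sketch, and the paper's actual update rule (which also weights by client participation frequency) may differ.

```python
# Illustrative sketch only (PyTorch). `model` is any torch.nn.Module classifier
# and `target_loader` iterates over the data to be unlearned; both are assumed.
import torch
import torch.nn.functional as F

def diagonal_fisher(model, target_loader, device="cpu"):
    """Approximate the diagonal of the Fisher information matrix on the
    target data by accumulating squared gradients of the log-likelihood.
    Batch-level gradients are used here, which is coarser than the usual
    per-sample definition but keeps the sketch short."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_samples = 0
    for x, y in target_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        log_probs = F.log_softmax(model(x), dim=1)
        F.nll_loss(log_probs, y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += (p.grad.detach() ** 2) * x.size(0)
        n_samples += x.size(0)
    return {n: f / max(n_samples, 1) for n, f in fisher.items()}

def dampen_important_params(model, fisher, top_frac=0.1, adjust_ratio=0.5):
    """Shrink the parameters most informative about the target data so the
    classification loss on that data increases. `top_frac` and `adjust_ratio`
    stand in for the importance- and participation-based ratios in the paper."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            scores = fisher[n].flatten()
            k = max(1, int(top_frac * scores.numel()))
            threshold = torch.topk(scores, k).values.min()
            mask = (fisher[n] >= threshold).float()
            p.mul_(1.0 - adjust_ratio * mask)
```

After this adjustment, the abstract's final step is a few federated retraining rounds with the remaining clients to restore accuracy on the retained data; that part is standard federated training and is omitted from the sketch.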

Source journal
Applied Intelligence (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 6.60
Self-citation rate: 20.80%
Articles published: 1361
Review time: 5.9 months
Journal description: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.