MUFTI: Multi-Domain Distillation-Based Heterogeneous Federated Continuous Learning

Impact Factor: 8.0 · CAS Region 1 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods)
Keke Gai;Zijun Wang;Jing Yu;Liehuang Zhu
{"title":"MUFTI:基于多域蒸馏的异构联邦持续学习","authors":"Keke Gai;Zijun Wang;Jing Yu;Liehuang Zhu","doi":"10.1109/TIFS.2025.3542246","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is an alternative approach that facilitates training machine learning models on distributed users’ data while preserving privacy. However, clients have different local model structures and most local data are non-independent and identically distributed, so that FL encounters heterogeneity and catastrophic forgetting issues when clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we have extended domain adaptation to FL via extracting features to obtain feature representations on unlabeled public datasets for collaborative training, narrowing the distance between feature outputs of different models under the same sample. On the other hand, we propose a combining knowledge distillation method to solve the catastrophic forgetting issue. Within a single task, dual-domain distillation is used to avoid data forgetting between different domains; for cross task learning in task flow, the logits output of the previous model is used as the teacher to avoid forgetting old tasks. The experiment results showed that MUFTI had a better performance in accuracy and robustness comparing to state-of-the-art methods. The evaluation also demonstrated that MUFTI could perform well in handling task increment issues, reducing catastrophic forgetting, and achieving trade-offs between multiple objectives.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2721-2733"},"PeriodicalIF":8.0000,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MUFTI: Multi-Domain Distillation-Based Heterogeneous Federated Continuous Learning\",\"authors\":\"Keke Gai;Zijun Wang;Jing Yu;Liehuang Zhu\",\"doi\":\"10.1109/TIFS.2025.3542246\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is an alternative approach that facilitates training machine learning models on distributed users’ data while preserving privacy. However, clients have different local model structures and most local data are non-independent and identically distributed, so that FL encounters heterogeneity and catastrophic forgetting issues when clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we have extended domain adaptation to FL via extracting features to obtain feature representations on unlabeled public datasets for collaborative training, narrowing the distance between feature outputs of different models under the same sample. On the other hand, we propose a combining knowledge distillation method to solve the catastrophic forgetting issue. Within a single task, dual-domain distillation is used to avoid data forgetting between different domains; for cross task learning in task flow, the logits output of the previous model is used as the teacher to avoid forgetting old tasks. The experiment results showed that MUFTI had a better performance in accuracy and robustness comparing to state-of-the-art methods. 
The evaluation also demonstrated that MUFTI could perform well in handling task increment issues, reducing catastrophic forgetting, and achieving trade-offs between multiple objectives.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"2721-2733\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-02-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10887363/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10887363/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) is an approach that facilitates training machine learning models on distributed users’ data while preserving privacy. However, clients have different local model structures and most local data are not independent and identically distributed (non-IID), so FL encounters heterogeneity and catastrophic forgetting issues as clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On the one hand, we extend domain adaptation to FL by extracting feature representations on unlabeled public datasets for collaborative training, narrowing the distance between the feature outputs of different models on the same sample. On the other hand, we propose a combined knowledge distillation method to address catastrophic forgetting. Within a single task, dual-domain distillation is used to avoid forgetting data across domains; for cross-task learning in the task flow, the logit output of the previous model serves as the teacher to avoid forgetting old tasks. Experimental results show that MUFTI achieves better accuracy and robustness than state-of-the-art methods. The evaluation also demonstrates that MUFTI handles task increments well, reduces catastrophic forgetting, and achieves trade-offs between multiple objectives.
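The abstract describes two distillation signals: a feature-alignment term computed on an unlabeled public dataset, which pulls heterogeneous client models toward similar representations of the same sample, and a cross-task logit-distillation term, which uses the previous task's frozen model as a teacher to limit forgetting. The following is a minimal PyTorch-style sketch of how such losses could be combined in a local training step; all names (`client_model`, `prev_task_model`, `peer_features`, `public_batch`, the loss weights, and the assumption that the model returns both features and logits) are illustrative, not the authors' actual interface.

```python
# Hedged sketch of the two distillation signals described in the abstract.
# The model interface and aggregation of peer features are assumptions.
import torch
import torch.nn.functional as F

def distillation_losses(client_model, prev_task_model, public_batch,
                        peer_features, temperature=2.0):
    """Return (feature-alignment loss, cross-task KD loss) on a public batch."""
    # (i) Cross-model feature alignment on shared unlabeled public data.
    # peer_features is assumed to be the aggregated feature output of the
    # other (heterogeneous) client models on the same public_batch.
    feats, logits = client_model(public_batch)      # assumed (features, logits)
    feat_align_loss = F.mse_loss(feats, peer_features.detach())

    # (ii) Cross-task distillation: the frozen previous-task model supplies
    # soft targets so the current model retains old-task knowledge.
    with torch.no_grad():
        _, teacher_logits = prev_task_model(public_batch)
    kd_loss = F.kl_div(
        F.log_softmax(logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    return feat_align_loss, kd_loss

# Illustrative use inside a local training step (weights are placeholders):
# feat_loss, kd_loss = distillation_losses(model, prev_model, x_pub, agg_feats)
# loss = task_loss + 0.5 * feat_loss + 0.5 * kd_loss
```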
Source journal
IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
Aims and scope: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.