{"title":"MUFTI:基于多域蒸馏的异构联邦持续学习","authors":"Keke Gai;Zijun Wang;Jing Yu;Liehuang Zhu","doi":"10.1109/TIFS.2025.3542246","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is an alternative approach that facilitates training machine learning models on distributed users’ data while preserving privacy. However, clients have different local model structures and most local data are non-independent and identically distributed, so that FL encounters heterogeneity and catastrophic forgetting issues when clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we have extended domain adaptation to FL via extracting features to obtain feature representations on unlabeled public datasets for collaborative training, narrowing the distance between feature outputs of different models under the same sample. On the other hand, we propose a combining knowledge distillation method to solve the catastrophic forgetting issue. Within a single task, dual-domain distillation is used to avoid data forgetting between different domains; for cross task learning in task flow, the logits output of the previous model is used as the teacher to avoid forgetting old tasks. The experiment results showed that MUFTI had a better performance in accuracy and robustness comparing to state-of-the-art methods. The evaluation also demonstrated that MUFTI could perform well in handling task increment issues, reducing catastrophic forgetting, and achieving trade-offs between multiple objectives.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2721-2733"},"PeriodicalIF":8.0000,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MUFTI: Multi-Domain Distillation-Based Heterogeneous Federated Continuous Learning\",\"authors\":\"Keke Gai;Zijun Wang;Jing Yu;Liehuang Zhu\",\"doi\":\"10.1109/TIFS.2025.3542246\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is an alternative approach that facilitates training machine learning models on distributed users’ data while preserving privacy. However, clients have different local model structures and most local data are non-independent and identically distributed, so that FL encounters heterogeneity and catastrophic forgetting issues when clients continuously accumulate new knowledge. In this work, we propose a scheme called MUFTI (Multi-Domain Distillation-based Heterogeneous Federated ConTInuous Learning). On one hand, we have extended domain adaptation to FL via extracting features to obtain feature representations on unlabeled public datasets for collaborative training, narrowing the distance between feature outputs of different models under the same sample. On the other hand, we propose a combining knowledge distillation method to solve the catastrophic forgetting issue. Within a single task, dual-domain distillation is used to avoid data forgetting between different domains; for cross task learning in task flow, the logits output of the previous model is used as the teacher to avoid forgetting old tasks. The experiment results showed that MUFTI had a better performance in accuracy and robustness comparing to state-of-the-art methods. 
The evaluation also demonstrated that MUFTI could perform well in handling task increment issues, reducing catastrophic forgetting, and achieving trade-offs between multiple objectives.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"2721-2733\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-02-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10887363/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10887363/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
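The abstract names two distillation signals: a feature-alignment term that narrows the distance between heterogeneous models' feature outputs on shared unlabeled public samples, and a cross-task term that uses the previous task's model logits as the teacher. The minimal PyTorch sketch below illustrates plausible forms of these two losses; it is not the authors' released code, and the function names, temperature T, and weights alpha/beta are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation) of the two
# distillation losses described in the MUFTI abstract.
import torch
import torch.nn.functional as F

def feature_alignment_loss(local_feats: torch.Tensor,
                           peer_feats: torch.Tensor) -> torch.Tensor:
    """Pull the local model's features for an unlabeled public sample toward
    a peer model's features for the same sample (heterogeneous models can
    still be compared in a shared feature space). MSE over normalized
    features is one plausible distance; the paper may use another."""
    return F.mse_loss(F.normalize(local_feats, dim=-1),
                      F.normalize(peer_feats.detach(), dim=-1))

def cross_task_distillation(student_logits: torch.Tensor,
                            old_model_logits: torch.Tensor,
                            T: float = 2.0) -> torch.Tensor:
    """Use the previous task's model as the teacher: KL divergence between
    temperature-softened distributions, scaled by T^2 as in standard
    knowledge distillation, so the new model retains old-task behavior."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(old_model_logits.detach() / T, dim=-1),
                    reduction="batchmean") * (T * T)

def total_loss(task_loss: torch.Tensor,
               align_loss: torch.Tensor,
               distill_loss: torch.Tensor,
               alpha: float = 1.0, beta: float = 1.0) -> torch.Tensor:
    """Hypothetical combined client objective: supervised loss on private
    data plus the two distillation terms, weighted by alpha and beta."""
    return task_loss + alpha * align_loss + beta * distill_loss
```

Freezing the teacher outputs with detach() keeps gradients flowing only into the current local model, which is the usual design choice when distilling from peers or from a previous task's snapshot.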
About the journal:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.