Title: Embracing knowledge integration from the vision-language model for federated domain generalization on multi-source fused data
Authors: Zhenyu Liu, Heye Zhang, Yiwen Wang, Zhifan Gao
DOI: 10.1016/j.inffus.2025.103714
Journal: Information Fusion, Volume 127, Article 103714
Impact Factor: 15.5; JCR: Q1 (Computer Science, Artificial Intelligence); CAS Region 1 (Computer Science)
Publication Date: 2025-09-16 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1566253525007717
Citations: 0
Abstract
Federated Domain Generalization (FedDG) has attracted attention for its potential to enable privacy-preserving fusion of multi-source data. It aims to develop a global model in a distributed manner that generalizes to unseen clients. However, it faces the challenge of the tradeoff between inter-client and intra-client domain shifts. Knowledge distillation from the vision-language model may address this challenge by transferring its zero-shot generalization ability to client models. However, it may suffer from distribution discrepancies between the pretraining data of the vision-language model and the downstream data. Although pre-distillation fine-tuning may alleviate this issue in centralized settings, it may not be compatible with FedDG. In this paper, we introduce an in-distillation selective adaptation framework for FedDG. It selectively fine-tunes unreliable outputs while directly distilling reliable ones from the vision-language model, effectively using knowledge distillation to address the challenge in FedDG. Furthermore, we propose a federated energy-driven reliability appraisal (FedReap) method to support this framework by appraising the reliability of outputs from the vision-language model. It includes hypersphere-constraint energy construction and label-guided energy partition. These two processes enable FedReap to acquire reliable and unreliable outputs for direct distillation and adaptation, respectively. In addition, FedReap employs a dual-level strategy for distillation and a dual-stage strategy for adaptation. Extensive experiments on five datasets demonstrate the effectiveness of FedReap compared to twelve state-of-the-art methods.
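The abstract does not spell out FedReap's exact energy construction (the hypersphere constraint and label guidance are specific to the paper), but the general pattern of energy-based reliability appraisal followed by selective distillation can be illustrated with a standard free-energy score, E(x) = -T * logsumexp(logits / T), where lower energy indicates a more confident, in-distribution output. The sketch below, with an assumed fixed threshold and numpy in place of any real vision-language model, partitions teacher outputs into reliable ones (distilled directly via a KL loss) and unreliable ones (flagged for adaptation); it is a minimal illustration, not the paper's method.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax with max-subtraction for numerical stability.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def energy_score(logits, T=1.0):
    # Free-energy score: E(x) = -T * logsumexp(logits / T).
    # Lower energy means the (teacher) model is more confident.
    z = logits / T
    m = z.max(axis=-1)
    return -T * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))

def selective_distillation_loss(teacher_logits, student_logits, threshold, T=2.0):
    # Partition teacher outputs by energy: low-energy outputs are treated as
    # reliable and distilled directly (KL divergence between softened
    # distributions); high-energy outputs are returned as a mask marking
    # samples that would instead be routed to the adaptation branch.
    energies = energy_score(teacher_logits)
    reliable = energies <= threshold
    if not reliable.any():
        return 0.0, ~reliable
    p_t = softmax(teacher_logits[reliable], T)
    p_s = softmax(student_logits[reliable], T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float(kl.mean()), ~reliable

# Toy usage: the first sample has a confident (low-energy) teacher output,
# the second is ambiguous (high-energy) and is flagged for adaptation.
teacher = np.array([[8.0, 0.5, 0.2], [0.4, 0.3, 0.5]])
student = np.array([[5.0, 1.0, 0.5], [0.2, 0.1, 0.9]])
loss, needs_adaptation = selective_distillation_loss(teacher, student, threshold=-2.0)
```

The threshold here is a placeholder; FedReap instead derives the reliable/unreliable split through its hypersphere-constraint energy construction and label-guided energy partition, which this sketch does not attempt to reproduce.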
Journal Introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.