Federated Domain Generalization: A Survey
Authors: Ying Li; Xingwei Wang; Rongfei Zeng; Praveen Kumar Donta; Ilir Murturi; Min Huang; Schahram Dustdar
DOI: 10.1109/JPROC.2025.3596173
Journal: Proceedings of the IEEE, vol. 113, no. 4, pp. 370-410
Publication date: 2025-08-20
URL: https://ieeexplore.ieee.org/document/11130884/
Machine learning (ML) typically relies on the assumption that training and testing distributions are identical and that data are centrally stored for training and testing. However, in real-world scenarios, distributions may differ significantly, and data are often distributed across different devices, organizations, or edge nodes. Consequently, it is essential to develop models capable of effectively generalizing across unseen distributions in data spanning various domains. In response to this challenge, there has been a surge of interest in federated domain generalization (FDG) in recent years. FDG synergizes federated learning (FL) and domain generalization (DG) techniques, facilitating collaborative model development across diverse source domains for effective generalization to unseen domains, all while maintaining data privacy. However, generalizing the federated model under domain shifts remains a complex, underexplored issue. This article provides a comprehensive survey of the latest advancements in this field. Initially, we discuss the development process from traditional ML to domain adaptation (DA) and DG, leading to FDG, and provide the corresponding formal definition. Subsequently, we classify recent methodologies into four distinct categories: federated domain alignment (FDAL), data manipulation (DM), learning strategies (LSs), and aggregation optimization (AO), detailing representative algorithms for each. We then provide an overview of commonly used datasets, applications, evaluations, and benchmarks. Finally, this survey outlines potential future research directions.
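To make the FDG setting concrete, the following is a minimal illustrative sketch (not taken from the survey) of a generic FL training loop in the spirit of FedAvg-style weighted averaging: each client holds data from a different source domain with a shifted feature distribution, trains locally, and the server aggregates the updates. The resulting global model is the object that FDG methods (domain alignment, data manipulation, learning strategies, aggregation optimization) then try to make robust to an unseen target domain. All function names and the toy data here are assumptions for illustration only.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def fedavg_round(global_w, domain_data):
    """One communication round: aggregate per-domain updates, weighted by sample count."""
    updates, sizes = [], []
    for X, y in domain_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy setup: three source-domain clients with shifted feature distributions.
rng = np.random.default_rng(0)
domains = []
for shift in (0.0, 1.0, 2.0):
    X = rng.normal(shift, 1.0, size=(100, 3))
    y = (X.sum(axis=1) > 3 * shift).astype(float)  # domain-dependent labels
    domains.append((X, y))

w = np.zeros(3)
for _ in range(20):  # 20 communication rounds
    w = fedavg_round(w, domains)
print(w.shape)
```

Plain weighted averaging like this is exactly where domain shift bites: clients whose local optima disagree pull the global model in conflicting directions, which is what the survey's AO category of methods aims to mitigate at the server side.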
About the journal:
Proceedings of the IEEE is the leading journal to provide in-depth review, survey, and tutorial coverage of the technical developments in electronics, electrical and computer engineering, and computer science. Consistently ranked as one of the top journals by Impact Factor, Article Influence Score and more, the journal serves as a trusted resource for engineers around the world.