FedDyH: A Multi-Policy with GA Optimization Framework for Dynamic Heterogeneous Federated Learning

Authors: Xuhua Zhao, Yongming Zheng, Jiaxiang Wan, Yehong Li, Donglin Zhu, Zhenyu Xu, Huijuan Lu
Journal: Biomimetics, 10(3) (IF 3.4, JCR Q1, Engineering, Multidisciplinary)
Published: 2025-03-17 (Journal Article)
DOI: https://doi.org/10.3390/biomimetics10030185
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11940811/pdf/
Citations: 0
Abstract
Federated learning (FL) is a distributed learning technique that ensures data privacy and has shown significant potential in cross-institutional image analysis. However, existing methods struggle with the inherent dynamic heterogeneity of real-world data, such as changes in cellular differentiation during disease progression or feature distribution shifts caused by different imaging devices. This dynamic heterogeneity can cause catastrophic forgetting, leading to reduced performance in medical predictions across stages. Unlike previous federated learning studies, which paid insufficient attention to dynamic heterogeneity, this paper proposes the FedDyH framework to address this challenge. Inspired by the adaptive regulation mechanisms of biological systems, the framework incorporates several core modules to tackle the issues arising from dynamic heterogeneity. First, the framework simulates intercellular information transfer through cross-client knowledge distillation, preserving local features while mitigating knowledge forgetting. Additionally, a dynamic regularization term is designed whose strength can be adaptively adjusted to real-world conditions. This mechanism resembles the role of regulatory T cells in the immune system, balancing global model convergence with local specificity adjustments to enhance the robustness of the global model while preventing interference from diverse client features. Finally, the framework introduces a genetic algorithm (GA) to simulate biological evolution, leveraging mechanisms such as gene selection, crossover, and mutation to optimize hyperparameter configurations. This enables the model to adaptively find suitable hyperparameters in an ever-changing environment, improving both adaptability and performance. Prior to this work, few studies had explored the use of evolutionary optimization algorithms for hyperparameter tuning in federated learning.
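The GA component described above can be sketched as a minimal, hypothetical hyperparameter search. Everything below is an illustrative assumption, not the paper's configuration: the two-gene encoding (learning rate, regularization strength), the toy fitness surrogate standing in for validation accuracy, and the selection/crossover/mutation rates.

```python
import random

def fitness(ind):
    # Toy surrogate for validation accuracy, peaking at an assumed
    # optimum of lr=0.1, reg=0.01 (higher is better).
    lr, reg = ind
    return -((lr - 0.1) ** 2 + (reg - 0.01) ** 2)

def crossover(a, b):
    # Uniform crossover: each gene is taken from either parent.
    return tuple(x if random.random() < 0.5 else y for x, y in zip(a, b))

def mutate(ind, sigma=0.02):
    # Gaussian mutation, clipped so genes stay positive.
    return tuple(max(1e-4, g + random.gauss(0.0, sigma)) for g in ind)

def ga_search(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [(random.uniform(0.001, 0.5), random.uniform(0.0, 0.1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]   # selection: keep the fittest half
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best_lr, best_reg = ga_search()
```

With elitism preserving the best individuals each generation, the search converges toward the surrogate's optimum; in FedDyH the fitness would instead come from evaluating a candidate configuration on federated validation data.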
Experimental results demonstrate that the FedDyH framework improves accuracy compared to the SOTA baseline FedDecorr by 2.59%, 0.55%, and 5.79% on the MNIST, Fashion-MNIST, and CIFAR-10 benchmark datasets, respectively. This framework effectively addresses data heterogeneity issues in dynamic heterogeneous environments, providing an innovative solution for achieving more stable and accurate distributed federated learning.
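The cross-client distillation and dynamic regularization terms mentioned in the abstract can be illustrated with standard formulations. The temperature-scaled KL distillation loss and the FedProx-style quadratic proximal penalty below are common stand-ins, not the paper's exact losses; the fixed strength `mu` is a placeholder for FedDyH's adaptively adjusted coefficient, and all names are hypothetical.

```python
import math

def softmax(z, tau=1.0):
    # Numerically stable temperature-softened softmax.
    m = max(z)
    e = [math.exp((v - m) / tau) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kd_loss(student_logits, teacher_logits, tau=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by tau^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, tau)
    q = softmax(student_logits, tau)
    return tau * tau * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def prox_term(local_w, global_w, mu):
    # Quadratic penalty pulling local weights toward the global model;
    # mu stands in for the abstract's adaptively tuned strength.
    return 0.5 * mu * sum((l - g) ** 2 for l, g in zip(local_w, global_w))
```

A client's local objective would then combine its task loss with `kd_loss` (soft targets from the global or peer models) and `prox_term`, trading off local specificity against global convergence.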