DFL topology optimization based on peer weighting mechanism and graph neural network in digital twin platform

Nguyen Anh Tuan, Atif Rizwan, Sa Jim Soe Moe, Anam Nawaz Khan, Do Hyeun Kim

Complex & Intelligent Systems, published 2025-04-22. DOI: 10.1007/s40747-025-01887-9

Decentralized federated learning (DFL) represents a distributed learning framework where participating nodes independently train local models and exchange model updates with proximate peers, circumventing the reliance on a centralized orchestrator. This paradigm effectively mitigates server-induced bottlenecks and eliminates single points of failure, which are inherent limitations of centralized federated learning architectures. However, DFL encounters significant challenges in attaining global model convergence due to inherent statistical heterogeneity across nodes and the dynamic nature of network topologies. For the first time, in this paper, we present a topology optimization framework for DFL that integrates a peer weighting mechanism with graph neural networks (GNNs) within a digital twin platform. The proposed approach leverages local model performance metrics and training latency as input factors to dynamically construct an optimized topology that balances computational efficiency and model performance. Specifically, we employ Particle Swarm Optimization to derive node-specific peer weight matrices and utilize a GNN to refine the underlying mesh topology based on these weights. Comprehensive experimental analyses conducted on benchmark datasets demonstrate the superiority of the proposed framework in achieving accelerated convergence and enhanced accuracy across diverse nodes. Additionally, comparative evaluations under IID and Non-IID data distributions substantiate the robustness and adaptability of the approach in heterogeneous learning environments, underscoring its potential to advance decentralized learning paradigms.
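The core aggregation step implied by the abstract — each node mixing its peers' model updates according to a node-specific peer weight matrix — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name and array shapes are hypothetical, and the weight matrix `W` is simply normalized here, whereas the paper derives it via Particle Swarm Optimization and refines the topology with a GNN.

```python
import numpy as np

def peer_weighted_aggregate(models: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One peer-weighted gossip round over flattened model parameters.

    models: (n_nodes, n_params) array; row i holds node i's local parameters.
    W:      (n_nodes, n_nodes) non-negative peer weight matrix; entry W[i, j]
            is node i's weight for node j (zero where no topology edge exists).
    Returns the (n_nodes, n_params) array of parameters after aggregation.
    """
    # Normalize rows so each node's incoming weights sum to 1
    # (a row-stochastic mixing matrix keeps parameter magnitudes stable).
    W = W / W.sum(axis=1, keepdims=True)
    # Each node's new parameters are a convex combination of its peers'.
    return W @ models

# Usage: two nodes, two parameters each; node 1 weights itself more heavily.
models = np.array([[1.0, 1.0],
                   [3.0, 3.0]])
W = np.array([[1.0, 1.0],   # node 0: equal weight to both peers
              [1.0, 3.0]])  # node 1: weight 0.25 / 0.75 after normalization
updated = peer_weighted_aggregate(models, W)
```

Under this convention, a sparser `W` (more zero entries) corresponds to a sparser communication topology, which is the quantity the paper's GNN stage then refines.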
Journal introduction:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques that foster cross-fertilization among the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.