DFL topology optimization based on peer weighting mechanism and graph neural network in digital twin platform

IF 5.0 · CAS Zone 2, Computer Science · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Nguyen Anh Tuan, Atif Rizwan, Sa Jim Soe Moe, Anam Nawaz Khan, Do Hyeun Kim
Journal: Complex & Intelligent Systems · DOI: 10.1007/s40747-025-01887-9 · Published: 2025-04-22 · Article type: Journal Article
Citations: 0

Abstract


Decentralized federated learning (DFL) represents a distributed learning framework where participating nodes independently train local models and exchange model updates with proximate peers, circumventing the reliance on a centralized orchestrator. This paradigm effectively mitigates server-induced bottlenecks and eliminates single points of failure, which are inherent limitations of centralized federated learning architectures. However, DFL encounters significant challenges in attaining global model convergence due to inherent statistical heterogeneity across nodes and the dynamic nature of network topologies. For the first time, in this paper, we present a topology optimization framework for DFL that integrates a peer weighting mechanism with graph neural networks (GNNs) within a digital twin platform. The proposed approach leverages local model performance metrics and training latency as input factors to dynamically construct an optimized topology that balances computational efficiency and model performance. Specifically, we employ Particle Swarm Optimization to derive node-specific peer weight matrices and utilize a GNN to refine the underlying mesh topology based on these weights. Comprehensive experimental analyses conducted on benchmark datasets demonstrate the superiority of the proposed framework in achieving accelerated convergence and enhanced accuracy across diverse nodes. Additionally, comparative evaluations under IID and Non-IID data distributions substantiate the robustness and adaptability of the approach in heterogeneous learning environments, underscoring its potential to advance decentralized learning paradigms.
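The abstract's core mechanism, using Particle Swarm Optimization to derive a per-node peer weight vector from peers' local model accuracy and training latency, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual method: the fitness function, the trade-off parameter `alpha`, and the softmax normalization used to keep the weights a valid convex combination are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def pso_peer_weights(acc, latency, n_particles=30, iters=100,
                     w=0.7, c1=1.5, c2=1.5, alpha=0.8, seed=0):
    """Toy PSO search for one node's peer-weight vector.

    Hypothetical fitness: reward peers with high local-model accuracy,
    penalize peers with high training latency (trade-off set by alpha),
    and softmax-normalize positions so weights sum to 1.
    """
    rng = np.random.default_rng(seed)
    n = len(acc)
    score = alpha * np.asarray(acc) - (1 - alpha) * np.asarray(latency)

    def fitness(pos):
        # softmax turns raw particle positions into valid weight vectors
        e = np.exp(pos - pos.max(axis=-1, keepdims=True))
        wts = e / e.sum(axis=-1, keepdims=True)
        return wts @ score  # expected utility of the weighted peer mix

    pos = rng.normal(size=(n_particles, n))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), fitness(pos)
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = fitness(pos)
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    e = np.exp(gbest - gbest.max())
    return e / e.sum()  # final peer-weight vector for this node
```

Stacking each node's resulting vector row-wise would give the node-specific peer weight matrix the abstract describes; the GNN refinement step that prunes the mesh topology from those weights is not shown here.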

Source journal: Complex & Intelligent Systems
CiteScore: 9.60 · Self-citation rate: 10.30% · Annual publications: 297
Aims and scope: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.