Dynamic Activation of Clients and Parameters for Federated Learning over Heterogeneous Graphs

Zishan Gu, Ke Zhang, Liang Chen, Sun
2023 IEEE 39th International Conference on Data Engineering (ICDE), April 2023. DOI: 10.1109/ICDE55515.2023.00126. Citations: 2.

Abstract

The data generated in many real-world applications can be modeled as heterogeneous graphs of multi-typed entities (nodes) and relations (links). Nowadays, such data are commonly generated and stored by distributed clients, making direct centralized model training impractical. While the data in each client are prone to biased local distributions, generalizable global models are still frequently needed for large-scale applications. However, the large number of clients incurs significant computational overhead due to communication and synchronization among the clients, whereas the biased local data distributions indicate that not all clients and parameters should be computed and updated at all times. Motivated by specifically designed preliminary studies on training a state-of-the-art heterogeneous graph neural network (HGN) with the vanilla FedAvg framework, in this work we propose to leverage the characteristics of heterogeneous graphs by designing dynamic activation strategies for the clients and parameters during the federated training of HGN, which we name FedDA. Moreover, we design a novel disentangled model, D-HGN, to enable type-oriented activation of model parameters for FedDA. The effectiveness and efficiency of our proposed techniques are backed by both theoretical and empirical analysis: we theoretically analyze the validity and convergence of FedDA and mathematically illustrate its efficiency gain; meanwhile, we demonstrate the significant performance gains of FedDA and corroborate its efficiency gains with extensive experiments over multiple realistic FL settings synthesized from real-world heterogeneous graphs.
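The core idea of activating only a subset of clients per communication round can be illustrated with a minimal sketch of a FedAvg-style round. This is a hypothetical toy implementation, not the paper's FedDA algorithm: it uses random client activation on a scalar least-squares model, whereas FedDA's activation strategies are designed around heterogeneous-graph characteristics. The function name `fedavg_round` and the `activation_fraction` parameter are illustrative assumptions.

```python
import random

def fedavg_round(global_w, client_datasets, activation_fraction=0.5, lr=0.1):
    """One FedAvg-style round in which only a fraction of clients is
    activated: inactive clients skip local training and synchronization,
    saving their computation and communication for this round."""
    k = max(1, int(len(client_datasets) * activation_fraction))
    active = random.sample(range(len(client_datasets)), k)  # dynamic activation
    updates, weights = [], []
    for cid in active:
        data = client_datasets[cid]
        # Local step: one gradient step on least squares y ~ w * x
        grad = sum(2 * (global_w * x - y) * x for x, y in data) / len(data)
        updates.append(global_w - lr * grad)
        weights.append(len(data))
    # Aggregate only the activated clients, weighted by local data size
    total = sum(weights)
    return sum(u * n for u, n in zip(updates, weights)) / total
```

Because each round touches only `k` of the clients, the per-round communication cost scales with the activation fraction rather than the full client count, which is the efficiency lever the abstract describes.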