Federated Multi-task Graph Learning

Yijing Liu, Dongming Han, Jianwei Zhang, Haiyang Zhu, Mingliang Xu, Wei Chen
ACM Transactions on Intelligent Systems and Technology (TIST)
DOI: 10.1145/3527622
Published: 2022-04-22 · Citations: 3

Abstract

Distributed processing and analysis of large-scale graph data remain challenging because of the high-level discrepancy among graphs. This study investigates a novel subproblem: distributed multi-task learning on graphs, which jointly learns multiple analysis tasks from decentralized graphs. We propose a federated multi-task graph learning (FMTGL) framework to solve the problem within a privacy-preserving and scalable scheme. Its core is an innovative data-fusion mechanism and a low-latency distributed optimization method. The former captures multi-source data relatedness and generates a universal task representation for local task analysis. The latter enables quick updates of our framework via gradient sparsification and tree-based aggregation. As a theoretical result, the proposed optimization method has a convergence rate that interpolates between \( \mathcal{O}(1/T) \) and \( \mathcal{O}(1/\sqrt{T}) \), up to logarithmic terms. Unlike previous studies, our work analyzes the convergence behavior under adaptive stepsize selection and non-convex assumptions. Experimental results on three graph datasets verify the effectiveness and scalability of FMTGL.
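The two communication-reduction ideas named in the abstract, gradient sparsification and tree-based aggregation, can be illustrated with a minimal sketch. This is not the paper's implementation; all function names, the top-k sparsification rule, and the pairwise reduction scheme are illustrative assumptions about how such components are commonly realized.

```python
# Hypothetical sketch (not FMTGL's actual code): top-k gradient
# sparsification plus pairwise tree-based aggregation of client gradients.

def sparsify_top_k(grad, k):
    """Keep only the k largest-magnitude entries of a gradient; zero the rest,
    so each client transmits far fewer nonzero values."""
    if k >= len(grad):
        return list(grad)
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    keep = set(top)
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

def tree_aggregate(grads):
    """Sum client gradients pairwise, halving the number of participants each
    round: roughly log2(n) aggregation rounds instead of n sequential sends."""
    layer = [list(g) for g in grads]
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append([a + b for a, b in zip(layer[i], layer[i + 1])])
        if len(layer) % 2:          # odd participant carries over unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Two toy clients, each keeping only its 2 largest-magnitude coordinates.
clients = [[0.9, -0.1, 0.05, -2.0], [0.25, 1.5, -0.02, 0.5]]
sparse = [sparsify_top_k(g, 2) for g in clients]
agg = tree_aggregate(sparse)
```

In this sketch the server-side cost per round drops from O(n) sequential receives to O(log n) tree levels, which is the latency benefit the abstract attributes to tree-based aggregation.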