Decentralized Graph Federated Multitask Learning for Streaming Data

Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, A. Kuh
{"title":"流数据的分散图联邦多任务学习","authors":"Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, A. Kuh","doi":"10.1109/CISS53076.2022.9751160","DOIUrl":null,"url":null,"abstract":"In federated learning (FL), multiple clients connected to a single server train a global model based on locally stored data without revealing their data to the server or other clients. Nonetheless, the current FL architecture is highly vulnerable to communication failures and computational bottlenecks at the server. In response, a recent work proposed a multi-server federated architecture, namely, a graph federated learning architecture (GFL). However, existing work assumes a fixed amount of data at clients and the training of a single global model. This paper proposes a decentralized online multitask learning algorithm based on GFL (O-GFML). Clients update their local models using continuous streaming data while clients and multiple servers can train different but related models simul-taneously. Furthermore, to enhance the communication efficiency of O-GFML, we develop a partial-sharing-based O-GFML (PSO-GFML). The PSO-GFML allows participating clients to exchange only a portion of model parameters with their respective servers during a global iteration, while non-participating clients update their local models if they have access to new data. In the context of kernel regression, we show the mean convergence of the PSO-GFML. Experimental results show that PSO-GFML can achieve competitive performance with a considerably lower communication overhead than O-GFML.","PeriodicalId":305918,"journal":{"name":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Decentralized Graph Federated Multitask Learning for Streaming Data\",\"authors\":\"Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, A. Kuh\",\"doi\":\"10.1109/CISS53076.2022.9751160\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In federated learning (FL), multiple clients connected to a single server train a global model based on locally stored data without revealing their data to the server or other clients. Nonetheless, the current FL architecture is highly vulnerable to communication failures and computational bottlenecks at the server. In response, a recent work proposed a multi-server federated architecture, namely, a graph federated learning architecture (GFL). However, existing work assumes a fixed amount of data at clients and the training of a single global model. This paper proposes a decentralized online multitask learning algorithm based on GFL (O-GFML). Clients update their local models using continuous streaming data while clients and multiple servers can train different but related models simul-taneously. Furthermore, to enhance the communication efficiency of O-GFML, we develop a partial-sharing-based O-GFML (PSO-GFML). The PSO-GFML allows participating clients to exchange only a portion of model parameters with their respective servers during a global iteration, while non-participating clients update their local models if they have access to new data. In the context of kernel regression, we show the mean convergence of the PSO-GFML. 
Experimental results show that PSO-GFML can achieve competitive performance with a considerably lower communication overhead than O-GFML.\",\"PeriodicalId\":305918,\"journal\":{\"name\":\"2022 56th Annual Conference on Information Sciences and Systems (CISS)\",\"volume\":\"51 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 56th Annual Conference on Information Sciences and Systems (CISS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CISS53076.2022.9751160\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 56th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS53076.2022.9751160","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9

Abstract

In federated learning (FL), multiple clients connected to a single server train a global model based on locally stored data without revealing their data to the server or other clients. Nonetheless, the current FL architecture is highly vulnerable to communication failures and computational bottlenecks at the server. In response, recent work proposed a multi-server federated architecture, namely, a graph federated learning (GFL) architecture. However, existing work assumes a fixed amount of data at clients and the training of a single global model. This paper proposes a decentralized online multitask learning algorithm based on GFL (O-GFML). Clients update their local models using continuous streaming data, while clients and multiple servers can train different but related models simultaneously. Furthermore, to enhance the communication efficiency of O-GFML, we develop a partial-sharing-based O-GFML (PSO-GFML). PSO-GFML allows participating clients to exchange only a portion of model parameters with their respective servers during a global iteration, while non-participating clients update their local models if they have access to new data. In the context of kernel regression, we show the mean convergence of PSO-GFML. Experimental results show that PSO-GFML can achieve competitive performance with considerably lower communication overhead than O-GFML.
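To make the partial-sharing idea concrete, the following is a minimal Python sketch, not the authors' reference implementation. It assumes a random-Fourier-feature approximation of kernel regression, a single server, and a handful of clients; the names, step size, and number of shared coordinates (SHARE) are illustrative assumptions. Each client runs local stochastic-gradient updates on streaming samples, and at every global iteration only a subset of parameter coordinates is exchanged with the server.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 50        # number of random Fourier features (illustrative)
STEP = 0.05   # step size for local stochastic-gradient updates (illustrative)
SHARE = 10    # parameter coordinates shared per global iteration (illustrative)

# Random Fourier features approximating a Gaussian kernel on scalar inputs.
w_rff = rng.normal(size=D)
b_rff = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(x):
    """Map a scalar input x to a D-dimensional random Fourier feature vector."""
    return np.sqrt(2.0 / D) * np.cos(w_rff * x + b_rff)

class Client:
    """A client holding a local linear model in the random feature space."""
    def __init__(self):
        self.theta = np.zeros(D)

    def local_update(self, x, y):
        """One LMS-style update on a newly streamed sample (x, y)."""
        phi = features(x)
        err = y - phi @ self.theta
        self.theta += STEP * err * phi

def partial_share(clients, server_theta):
    """Exchange only SHARE randomly chosen coordinates between clients and the server."""
    idx = rng.choice(D, size=SHARE, replace=False)
    # The server averages the shared coordinates received from the clients ...
    server_theta[idx] = np.mean([c.theta[idx] for c in clients], axis=0)
    # ... and the clients copy the aggregate back; all other coordinates stay local.
    for c in clients:
        c.theta[idx] = server_theta[idx]
    return server_theta

# Toy streaming experiment: four clients observe noisy samples of sin(x).
clients = [Client() for _ in range(4)]
server_theta = np.zeros(D)

for t in range(2000):
    for c in clients:
        x = rng.uniform(-3.0, 3.0)
        y = np.sin(x) + 0.1 * rng.normal()
        c.local_update(x, y)
    if t % 10 == 0:  # run a global (partial-sharing) iteration every 10 local steps
        server_theta = partial_share(clients, server_theta)

x_test = np.linspace(-3.0, 3.0, 5)
print("server model:", np.round([features(x) @ server_theta for x in x_test], 2))
print("ground truth:", np.round(np.sin(x_test), 2))
```

The communication saving comes from transmitting only SHARE of the D coordinates per global iteration. The paper's multi-server GFL setting would additionally have servers exchange aggregates with neighboring servers over a graph, which this single-server toy example omits.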