Exploring System-Heterogeneous Federated Learning with Dynamic Model Selection

Dixi Yao
{"title":"Exploring System-Heterogeneous Federated Learning with Dynamic Model Selection","authors":"Dixi Yao","doi":"arxiv-2409.08858","DOIUrl":null,"url":null,"abstract":"Federated learning is a distributed learning paradigm in which multiple\nmobile clients train a global model while keeping data local. These mobile\nclients can have various available memory and network bandwidth. However, to\nachieve the best global model performance, how we can utilize available memory\nand network bandwidth to the maximum remains an open challenge. In this paper,\nwe propose to assign each client a subset of the global model, having different\nlayers and channels on each layer. To realize that, we design a constrained\nmodel search process with early stop to improve efficiency of finding the\nmodels from such a very large space; and a data-free knowledge distillation\nmechanism to improve the global model performance when aggregating models of\nsuch different structures. For fair and reproducible comparison between\ndifferent solutions, we develop a new system, which can directly allocate\ndifferent memory and bandwidth to each client according to memory and bandwidth\nlogs collected on mobile devices. The evaluation shows that our solution can\nhave accuracy increase ranging from 2.43\\% to 15.81\\% and provide 5\\% to 40\\%\nmore memory and bandwidth utilization with negligible extra running time,\ncomparing to existing state-of-the-art system-heterogeneous federated learning\nmethods under different available memory and bandwidth, non-i.i.d.~datasets,\nimage and text tasks.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"20 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08858","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated learning is a distributed learning paradigm in which multiple mobile clients train a global model while keeping their data local. These clients can have widely varying amounts of available memory and network bandwidth. However, how to utilize the available memory and bandwidth to the fullest in order to achieve the best global model performance remains an open challenge. In this paper, we propose to assign each client a subset of the global model, with a different number of layers and a different number of channels in each layer. To realize this, we design a constrained model search process with early stopping, which improves the efficiency of finding models in such a very large search space, and a data-free knowledge distillation mechanism, which improves global model performance when aggregating models with such different structures. For a fair and reproducible comparison between different solutions, we develop a new system that can directly allocate different amounts of memory and bandwidth to each client according to memory and bandwidth logs collected on mobile devices. The evaluation shows that, compared with existing state-of-the-art system-heterogeneous federated learning methods, our solution increases accuracy by 2.43% to 15.81% and achieves 5% to 40% higher memory and bandwidth utilization with negligible extra running time, across different available memory and bandwidth profiles, non-i.i.d. datasets, and image and text tasks.
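To make the core idea concrete, the sketch below illustrates how a per-client submodel can be carved out of a global model by keeping only a prefix of its layers (depth) and a fraction of the channels in each kept layer (width), so that weights shared with the global model can be copied directly. This is a minimal illustration of the general mechanism, not the paper's actual implementation; all names (`slice_conv`, `extract_submodel`, `width_fracs`) are hypothetical, the toy convolutional model is an assumption, and the paper's constrained search over this space and its data-free distillation step are not shown.

```python
# Hypothetical sketch of per-client submodel extraction; not the paper's code.
import torch
import torch.nn as nn

def slice_conv(conv: nn.Conv2d, in_keep: int, out_keep: int) -> nn.Conv2d:
    """Build a narrower Conv2d that reuses the first in_keep input and
    out_keep output channels of the global layer's weights."""
    sub = nn.Conv2d(in_keep, out_keep, conv.kernel_size,
                    stride=conv.stride, padding=conv.padding,
                    bias=conv.bias is not None)
    with torch.no_grad():
        sub.weight.copy_(conv.weight[:out_keep, :in_keep])
        if conv.bias is not None:
            sub.bias.copy_(conv.bias[:out_keep])
    return sub

def extract_submodel(global_convs, width_fracs, num_classes=10):
    """Keep len(width_fracs) layers of the global model (depth) and a
    fraction of each kept layer's output channels (width). The head is
    re-initialized at the client's width for simplicity."""
    layers, in_ch = [], global_convs[0].in_channels
    for conv, frac in zip(global_convs, width_fracs):
        out_ch = max(1, int(conv.out_channels * frac))
        layers += [slice_conv(conv, in_ch, out_ch), nn.ReLU()]
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(in_ch, num_classes)]
    return nn.Sequential(*layers)

# Example: a 4-conv global model. A memory-poor client gets the first
# two layers at half width; a stronger client gets the full model.
global_convs = [nn.Conv2d(c_in, c_out, 3, padding=1)
                for c_in, c_out in [(3, 32), (32, 64), (64, 128), (128, 256)]]
small_client = extract_submodel(global_convs, [0.5, 0.5])
large_client = extract_submodel(global_convs, [1.0, 1.0, 1.0, 1.0])
```

In this sketch, depth is set by how many entries `width_fracs` has and width by the values themselves; in the paper's setting, the server would choose such a configuration per client based on that client's reported memory and bandwidth budget.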