Optimize Scheduling of Federated Learning on Battery-powered Mobile Devices

Cong Wang, Xin Wei, Pengzhan Zhou
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 212–221, May 2020
DOI: 10.1109/IPDPS47924.2020.00031
Citations: 16

Abstract

Optimize Scheduling of Federated Learning on Battery-powered Mobile Devices
Federated learning learns a collaborative model by aggregating locally-computed updates from mobile devices for privacy preservation. While current research typically prioritizes minimizing communication overhead, we demonstrate through an empirical study that computation heterogeneity is a more pronounced bottleneck on battery-powered mobile devices. Moreover, if classes are unbalanced among the mobile devices, inappropriate selection of participants may cause gradient divergence and accuracy loss. In this paper, we utilize data as a tunable knob to schedule training and achieve near-optimal solutions for computation time and accuracy loss. Based on offline profiling, we formulate optimization problems and propose polynomial-time algorithms for both class-balanced and class-unbalanced data. We evaluate the optimization framework extensively on a mobile testbed with two datasets. Compared with common federated learning benchmarks, our algorithms achieve 210× speedups with negligible accuracy loss. They also mitigate the impact of mobile stragglers and improve parallelism for federated learning.
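To illustrate the core idea of using data as a tunable knob, here is a minimal, hypothetical sketch (not the paper's actual algorithm): with per-device training speeds obtained from offline profiling, assign each device a data share proportional to its speed, so that every device finishes its local epoch in roughly the same wall-clock time and stragglers are mitigated. The device names and speed numbers below are illustrative assumptions.

```python
# Hypothetical sketch of data-size scheduling for heterogeneous devices.
# Speeds (samples/sec) would come from offline profiling; the values and
# device names here are made up for illustration.

def assign_data(total_samples, speeds):
    """Split total_samples across devices proportionally to their
    profiled training speed, so each device's local epoch takes
    roughly the same wall-clock time (minimizing the makespan)."""
    total_speed = sum(speeds.values())
    shares = {dev: int(total_samples * s / total_speed)
              for dev, s in speeds.items()}
    # Hand any integer-rounding remainder to the fastest device.
    fastest = max(speeds, key=speeds.get)
    shares[fastest] += total_samples - sum(shares.values())
    return shares

speeds = {"phone_a": 120.0, "phone_b": 60.0, "phone_c": 20.0}
shares = assign_data(10_000, speeds)
# Per-device local time is shares[d] / speeds[d], which is now
# nearly equal across devices (50 s each in this toy example).
```

In practice the paper's formulation also accounts for accuracy loss under class imbalance, which this toy proportional split ignores.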