Loss and Energy Tradeoff in Multi-access Edge Computing Enabled Federated Learning

Chit Wutyee Zaw, C. Hong
{"title":"Loss and Energy Tradeoff in Multi-access Edge Computing Enabled Federated Learning","authors":"Chit Wutyee Zaw, C. Hong","doi":"10.1109/ICOIN50884.2021.9333972","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) encourages users to train statistical models on their local devices. Since mobile devices have the limited power and computing capabilities, the users are rational in minimizing their energy consumption with the cost of the model’s accuracy. Multi-access Edge Computing (MEC) enabled FL is a prominent approach where users can offload a fraction of their dataset to the MEC server where the training of the statistical model is performed with the help of the powerful MEC server in parallel with the local training at the mobile users. With the size of dataset offloaded to the MEC server, both the performance of the model and the energy consumption of the system are varied. We analyze this tradeoff between the performance of the system and the energy consumption at the MEC server and mobile users. The time consumption can also be saved by managing the size of the dataset offloaded to the MEC server. Since the MEC server and mobile users have the conflicting interest in saving the energy consumption with the constraint on the time taken for one computing round where the performance of the model fluctuates across the size of offloaded dataset, we analyze the tradeoff by formulating the resource management problem as a penalized convex optimization problem. We propose a distributed resource management problem for MEC enabled FL system where the global model is responsible for radio resource management and each local model performs a dataset offloading decision. Then, we perform the simulation to show the tradeoff and performance of the proposed algorithm.","PeriodicalId":6741,"journal":{"name":"2021 International Conference on Information Networking (ICOIN)","volume":"42 1","pages":"597-602"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Information Networking (ICOIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOIN50884.2021.9333972","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Federated learning (FL) encourages users to train statistical models on their local devices. Since mobile devices have limited power and computing capabilities, users rationally minimize their energy consumption at the cost of the model's accuracy. Multi-access Edge Computing (MEC) enabled FL is a prominent approach in which users offload a fraction of their dataset to the MEC server, where the statistical model is trained with the help of the powerful MEC server in parallel with local training at the mobile users. Both the performance of the model and the energy consumption of the system vary with the size of the dataset offloaded to the MEC server. We analyze this tradeoff between the performance of the system and the energy consumption at the MEC server and the mobile users. The time consumption per round can also be reduced by managing the size of the offloaded dataset. Since the MEC server and the mobile users have conflicting interests in saving energy under a constraint on the time taken for one computing round, and since the performance of the model fluctuates with the size of the offloaded dataset, we analyze the tradeoff by formulating the resource management problem as a penalized convex optimization problem. We propose a distributed resource management scheme for the MEC-enabled FL system in which a global problem handles radio resource management and each local problem makes a dataset offloading decision. Finally, we present simulations that show the tradeoff and the performance of the proposed algorithm.
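To make the penalized convex formulation concrete, the sketch below casts the loss-energy tradeoff as a single convex program over the per-user offloading fractions. The energy and time models, the quadratic loss surrogate, the coefficients, and the penalty weight `rho` are illustrative assumptions for exposition only, not the paper's exact formulation or its distributed decomposition.

```python
# Minimal sketch of a penalized convex loss-energy tradeoff (illustrative only).
# All models and coefficients below are assumed, not taken from the paper.
import cvxpy as cp
import numpy as np

n_users = 4
D = np.array([800.0, 1000.0, 600.0, 1200.0])   # samples per user (assumed)
r = np.array([2e6, 1.5e6, 3e6, 1e6])           # uplink rates in bit/s (assumed)
bits_per_sample = 1e4                          # payload per sample (assumed)

e_loc, e_tx, e_srv = 2e-3, 1e-7, 5e-4          # energy coefficients (assumed)
t_loc, t_srv = 1e-3, 2e-4                      # per-sample compute times (assumed)
T_max = 1.0                                    # round deadline in seconds (assumed)
rho = 50.0                                     # penalty weight on deadline violation

x = cp.Variable(n_users)                       # fraction of each dataset offloaded

# Energy: local compute on the kept fraction, uplink transmission, server compute.
energy = cp.sum(cp.multiply(e_loc * D, 1 - x)
                + cp.multiply(e_tx * bits_per_sample * D, x)
                + cp.multiply(e_srv * D, x))

# Convex surrogate for the model loss: both keeping everything local and
# offloading everything are penalized, standing in for the fluctuation of
# performance with the offloaded dataset size described in the abstract.
loss_proxy = cp.sum_squares(x - 0.5)

# Per-user round time: the slower of local training and (upload + server training).
t_local = cp.multiply(t_loc * D, 1 - x)
t_remote = cp.multiply(bits_per_sample * D / r, x) + cp.multiply(t_srv * D, x)
round_time = cp.maximum(t_local, t_remote)

# The round deadline enters as a penalty term, giving a penalized convex problem.
penalty = rho * cp.sum(cp.pos(round_time - T_max))

problem = cp.Problem(cp.Minimize(loss_proxy + energy + penalty),
                     [x >= 0, x <= 1])
problem.solve()
print("offloading fractions:", np.round(x.value, 3))
```

In the paper's distributed scheme this joint problem would instead be split, with radio resource management solved globally and each user solving its own offloading decision; the centralized version above is only meant to show how the deadline constraint can be folded into the objective as a convex penalty.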