RLFL: A Reinforcement Learning Aggregation Approach for Hybrid Federated Learning Systems Using Full and Ternary Precision

IF 3.7 · CAS Tier 2 (Engineering & Technology) · JCR Q2, Engineering, Electrical & Electronic
HamidReza Imani;Jeff Anderson;Samuel Farid;Abdolah Amirany;Tarek El-Ghazawi
DOI: 10.1109/JETCAS.2024.3483554
Journal: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 4, pp. 673-687
Published: 2024-10-18 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10721467/
Citation count: 0

Abstract

Federated Learning (FL) has emerged as an approach that provides a privacy-preserving and communication-efficient Machine Learning (ML) framework in mobile-edge environments, which are likely to be resource-constrained and heterogeneous. The precision level and performance required from each device may therefore vary with circumstances, giving rise to designs containing mixed-precision and quantized models. Among the various quantization schemes, binary and ternary representations are significant because they enable arrangements that strike an effective balance between performance and precision. In this paper, we propose RLFL, a hybrid ternary/full-precision FL system together with a Reinforcement Learning (RL) aggregation method, with the goal of improved performance compared to a homogeneous ternary environment. The system consists of a mix of full-precision clients and resource-constrained clients with ternary ML models. However, aggregating models with ternary and full-precision weights using traditional aggregation approaches presents a challenge due to the disparity in weight magnitudes. To improve accuracy, we use a deep RL model to explore and optimize the contribution assigned to each client's model during aggregation in each iteration. We evaluate and compare the accuracy and communication overhead of the proposed approach against prior work on the MNIST, FMNIST, and CIFAR10 classification datasets. Evaluation results show that the proposed RLFL system, along with its aggregation technique, outperforms existing FL approaches by 5% to 19% in accuracy while imposing negligible computation overhead.
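The two ingredients the abstract describes — ternary weight quantization on constrained clients, and a server-side aggregation that weights each client's contribution — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold-based quantizer and the `aggregate` helper are assumptions (the abstract does not specify the quantization rule, and in RLFL the contribution weights would be produced by the deep RL agent rather than supplied by hand).

```python
import numpy as np

def ternary_quantize(w, threshold=0.05):
    # Map each weight to {-alpha, 0, +alpha}: a common ternary scheme.
    # Weights below |threshold| are zeroed; alpha is the mean magnitude
    # of the surviving weights (hypothetical choice for illustration).
    mask = np.abs(w) > threshold
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

def aggregate(client_weights, contributions):
    # Weighted average of client models. In RLFL the per-client
    # contributions are chosen by an RL agent each round; here they
    # are plain inputs, normalized to sum to 1.
    c = np.asarray(contributions, dtype=float)
    c = c / c.sum()
    return sum(ci * wi for ci, wi in zip(c, client_weights))

# One toy round: a full-precision client and a ternary client.
full_model = np.array([0.50, -0.30, 0.01])
ternary_model = ternary_quantize(np.array([0.40, -0.25, 0.02]))
global_model = aggregate([full_model, ternary_model], [0.7, 0.3])
```

The magnitude disparity the abstract mentions is visible here: the ternary client's nonzero weights all share one scale `alpha`, so a naive unweighted average would let whichever representation has larger magnitudes dominate — which is what motivates learning the contribution weights.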
Source journal
CiteScore: 8.50
Self-citation rate: 2.20%
Articles per year: 86
Journal description: The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits, with particular emphasis on emerging areas, special issues on topics that cover the entire scope of the IEEE Circuits and Systems (CAS) Society, namely the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.