Federated Learning With Automated Dual-Level Hyperparameter Tuning

IF 2.7 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Rakib Ul Haque;Panagiotis Markopoulos
{"title":"自动双级超参数调优的联邦学习","authors":"Rakib Ul Haque;Panagiotis Markopoulos","doi":"10.1109/OJSP.2025.3578273","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is a decentralized machine learning (ML) approach where multiple clients collaboratively train a shared model over several update rounds without exchanging local data. Similar to centralized learning, determining hyperparameters (HPs) like learning rate and batch size remains challenging yet critical for model performance. Current adaptive HP-tuning methods are often domain-specific and heavily influenced by initialization. Moreover, model accuracy often improves slowly, requiring many update rounds. This slow improvement is particularly problematic for FL, where each update round incurs high communication costs in addition to computation and energy costs. In this work, we introduce FLAUTO, the first method to perform dynamic HP-tuning simultaneously at both local (client) and global (server) levels. This dual-level adaptation directly addresses critical bottlenecks in FL, including slow convergence, client heterogeneity, and high communication costs, distinguishing it from existing approaches. FLAUTO leverages training loss and relative local model deviation as novel metrics, enabling robust and dynamic hyperparameter adjustments without reliance on initial guesses. By prioritizing high performance in early update rounds, FLAUTO significantly reduces communication and energy overhead—key challenges in FL deployments. Comprehensive experimental studies on image classification and object detection tasks demonstrate that FLAUTO consistently outperforms state-of-the-art methods, establishing its efficacy and broad applicability.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"795-802"},"PeriodicalIF":2.7000,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11029096","citationCount":"0","resultStr":"{\"title\":\"Federated Learning With Automated Dual-Level Hyperparameter Tuning\",\"authors\":\"Rakib Ul Haque;Panagiotis Markopoulos\",\"doi\":\"10.1109/OJSP.2025.3578273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is a decentralized machine learning (ML) approach where multiple clients collaboratively train a shared model over several update rounds without exchanging local data. Similar to centralized learning, determining hyperparameters (HPs) like learning rate and batch size remains challenging yet critical for model performance. Current adaptive HP-tuning methods are often domain-specific and heavily influenced by initialization. Moreover, model accuracy often improves slowly, requiring many update rounds. This slow improvement is particularly problematic for FL, where each update round incurs high communication costs in addition to computation and energy costs. In this work, we introduce FLAUTO, the first method to perform dynamic HP-tuning simultaneously at both local (client) and global (server) levels. This dual-level adaptation directly addresses critical bottlenecks in FL, including slow convergence, client heterogeneity, and high communication costs, distinguishing it from existing approaches. FLAUTO leverages training loss and relative local model deviation as novel metrics, enabling robust and dynamic hyperparameter adjustments without reliance on initial guesses. 
By prioritizing high performance in early update rounds, FLAUTO significantly reduces communication and energy overhead—key challenges in FL deployments. Comprehensive experimental studies on image classification and object detection tasks demonstrate that FLAUTO consistently outperforms state-of-the-art methods, establishing its efficacy and broad applicability.\",\"PeriodicalId\":73300,\"journal\":{\"name\":\"IEEE open journal of signal processing\",\"volume\":\"6 \",\"pages\":\"795-802\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2025-06-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11029096\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE open journal of signal processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11029096/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of signal processing","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11029096/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) is a decentralized machine learning (ML) approach where multiple clients collaboratively train a shared model over several update rounds without exchanging local data. Similar to centralized learning, determining hyperparameters (HPs) like learning rate and batch size remains challenging yet critical for model performance. Current adaptive HP-tuning methods are often domain-specific and heavily influenced by initialization. Moreover, model accuracy often improves slowly, requiring many update rounds. This slow improvement is particularly problematic for FL, where each update round incurs high communication costs in addition to computation and energy costs. In this work, we introduce FLAUTO, the first method to perform dynamic HP-tuning simultaneously at both local (client) and global (server) levels. This dual-level adaptation directly addresses critical bottlenecks in FL, including slow convergence, client heterogeneity, and high communication costs, distinguishing it from existing approaches. FLAUTO leverages training loss and relative local model deviation as novel metrics, enabling robust and dynamic hyperparameter adjustments without reliance on initial guesses. By prioritizing high performance in early update rounds, FLAUTO significantly reduces communication and energy overhead—key challenges in FL deployments. Comprehensive experimental studies on image classification and object detection tasks demonstrate that FLAUTO consistently outperforms state-of-the-art methods, establishing its efficacy and broad applicability.
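The paper itself defines FLAUTO's update rules; as a rough illustration only, the following minimal NumPy sketch shows the dual-level idea the abstract describes: each client adapts its own learning rate from its training-loss trend, while the server scales its aggregation step using the spread of relative local model deviations. The concrete adjustment rules, constants, and the least-squares task below are illustrative assumptions, not the published method.

```python
# Hypothetical sketch of dual-level hyperparameter tuning in federated
# averaging, loosely following the abstract. The halving/growing of client
# learning rates and the deviation-based server step are assumed rules,
# not FLAUTO's actual update equations.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, data, lr, steps=10):
    """One client's local SGD on a least-squares objective."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    loss = float(np.mean((X @ w - y) ** 2))
    return w, loss

# Synthetic, heterogeneous client datasets (different noise levels).
w_true = np.array([2.0, -1.0])
clients = []
for noise in (0.1, 0.5, 1.0):
    X = rng.normal(size=(64, 2))
    y = X @ w_true + noise * rng.normal(size=64)
    clients.append((X, y))

w_global = np.zeros(2)
client_lrs = [0.1] * len(clients)        # local (client-level) HPs
prev_losses = [np.inf] * len(clients)
server_lr = 1.0                          # global (server-level) HP

for rnd in range(20):
    updates, losses = [], []
    for i, data in enumerate(clients):
        w_i, loss = local_train(w_global.copy(), data, client_lrs[i])
        # Local level: shrink this client's LR if its training loss
        # stopped improving, otherwise grow it slightly (assumed rule).
        client_lrs[i] *= 0.5 if loss > prev_losses[i] else 1.05
        prev_losses[i] = loss
        updates.append(w_i - w_global)
        losses.append(loss)

    # Relative local model deviation: each client's update norm against
    # the mean update norm across clients (one plausible reading of the
    # metric named in the abstract).
    norms = np.array([np.linalg.norm(u) for u in updates])
    rel_dev = norms / (norms.mean() + 1e-12)

    # Global level: damp the server step when clients disagree strongly
    # (high deviation spread); take bolder steps when they agree.
    server_lr = 1.0 / (1.0 + rel_dev.std())
    w_global = w_global + server_lr * np.mean(updates, axis=0)

    print(f"round {rnd:2d}  mean loss {np.mean(losses):.4f}  "
          f"server_lr {server_lr:.3f}")
```

Because both levels adjust from observed quantities (loss trends and update deviations) rather than from a fixed schedule, the sketch needs no hand-picked initial guesses beyond a starting learning rate, which mirrors the initialization-independence the abstract claims.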
Source journal
CiteScore: 5.30
Self-citation rate: 0.00%
Articles published: 0
Review time: 22 weeks