A Federated Deep Reinforcement Learning-based Low-power Caching Strategy for Cloud-edge Collaboration

IF 4.3 · Q1, Engineering, Electrical & Electronic
Xinyu Zhang, Zhigang Hu, Yang Liang, Hui Xiao, Aikun Xu, Meiguang Zheng, Chuan Sun
{"title":"A Federated Deep Reinforcement Learning-based Low-power Caching Strategy for Cloud-edge Collaboration","authors":"Xinyu Zhang, Zhigang Hu, Yang Liang, Hui Xiao, Aikun Xu, Meiguang Zheng, Chuan Sun","doi":"10.1007/s10723-023-09730-6","DOIUrl":null,"url":null,"abstract":"<p>In the era of ubiquitous network devices, an exponential increase in content requests from user equipment (UE) calls for optimized caching strategies within a cloud-edge integration. This approach is critical to handling large numbers of requests. To enhance caching efficiency, federated deep reinforcement learning (FDRL) is widely used to adjust caching policies. Nonetheless, for improved adaptability in dynamic scenarios, FDRL generally demands extended and online deep training, incurring a notable energy overhead when contrasted with rule-based approaches. With the aim of achieving a harmony between caching efficiency and training energy expenditure, we integrate a content request latency model, a deep reinforcement learning model based on markov decision processes (MDP), and a two-stage training energy consumption model. Together, these components define a new average delay and training energy gain (ADTEG) challenge. To address this challenge, we put forth a innovative dynamic federated optimization strategy. This approach refines the pre-training phase through the use of cluster-based strategies and parameter transfer methodologies. The online training phase is improved through a dynamic federated framework and an adaptive local iteration count. The experimental findings affirm that our proposed methodology reduces the training energy outlay while maintaining caching efficacy.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10723-023-09730-6","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

In the era of ubiquitous network devices, an exponential increase in content requests from user equipment (UE) calls for optimized caching strategies under cloud-edge collaboration, which is critical to handling such request volumes. To enhance caching efficiency, federated deep reinforcement learning (FDRL) is widely used to adjust caching policies. However, to remain adaptive in dynamic scenarios, FDRL generally demands extended online deep training, incurring a notable energy overhead compared with rule-based approaches. To balance caching efficiency against training energy expenditure, we integrate a content request latency model, a deep reinforcement learning model based on Markov decision processes (MDP), and a two-stage training energy consumption model; together, these components define a new average delay and training energy gain (ADTEG) problem. To address it, we propose a dynamic federated optimization strategy that refines the pre-training phase through cluster-based strategies and parameter transfer, and improves the online training phase through a dynamic federated framework and an adaptive local iteration count. Experimental results confirm that the proposed method reduces training energy consumption while maintaining caching efficacy.
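The abstract gives only a high-level view of the method, so the sketch below illustrates the general shape of one component it names: a FedAvg-style round in which each edge agent runs an adaptive number of local iterations before the cloud aggregates. Everything concrete here is an assumption for illustration, not the authors' design: the NumPy vectors standing in for each agent's DRL weights, the quadratic surrogate loss in `local_train`, and the drift-based `adaptive_iters` rule are all hypothetical.

```python
# Minimal sketch (not the paper's code) of one dynamic federated
# round with an adaptive local iteration count. NumPy vectors stand
# in for each edge agent's DRL weights.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, data, iters, lr=0.05):
    """Run `iters` gradient steps on a toy quadratic surrogate loss.
    In the paper's setting this would be DRL updates driven by cached
    content-request traces; the loss here is illustrative only."""
    w = weights.copy()
    for _ in range(iters):
        grad = w - data          # gradient of 0.5 * ||w - data||^2
        w -= lr * grad
    return w

def adaptive_iters(local_w, global_w, base=5, cap=20):
    """Hypothetical rule: an agent whose model has drifted further
    from the global model gets more local iterations, up to a cap."""
    drift = np.linalg.norm(local_w - global_w)
    return min(cap, base + int(10 * drift))

# Three edge agents with heterogeneous request patterns (toy data).
dim = 8
global_w = np.zeros(dim)
agent_data = [rng.normal(loc=mu, size=dim) for mu in (0.0, 1.0, 2.0)]
local_ws = [global_w.copy() for _ in agent_data]

for round_idx in range(10):
    updates, sizes = [], []
    for i, data in enumerate(agent_data):
        iters = adaptive_iters(local_ws[i], global_w)
        local_ws[i] = local_train(global_w, data, iters)
        updates.append(local_ws[i])
        sizes.append(len(data))   # weight by local data volume
    # FedAvg-style aggregation of the edge models at the cloud.
    global_w = np.average(updates, axis=0, weights=sizes)

print("global model after 10 rounds:", np.round(global_w, 2))
```

Under this toy rule, an agent whose local model has drifted further from the global model runs more local steps before aggregation, which is one plausible way an adaptive iteration count could trade per-round communication against local training energy.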
