AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks

Chao Fang;Xiangheng Meng;Zhaoming Hu;Fangmin Xu;Deze Zeng;Mianxiong Dong;Wei Ni
{"title":"AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks","authors":"Chao Fang;Xiangheng Meng;Zhaoming Hu;Fangmin Xu;Deze Zeng;Mianxiong Dong;Wei Ni","doi":"10.1109/OJCS.2022.3206446","DOIUrl":null,"url":null,"abstract":"To tackle a challenging energy efficiency problem caused by the growing mobile Internet traffic, this paper proposes a deep reinforcement learning (DRL)-based green content task offloading scheme in cloud-edge-end cooperation networks. Specifically, we formulate the problem as a power minimization model, where requests arriving at a node for the same content can be aggregated in its queue and in-network caching is widely deployed in heterogeneous environments. A novel DRL algorithm is designed to minimize the power consumption by making collaborative caching and task offloading decisions in each slot on the basis of content request information in previous slots and current network state. Numerical results show that our proposed content task offloading model achieves better power efficiency than the existing popular counterparts in cloud-edge-end collaboration networks, and fast converges to the stable state.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"3 ","pages":"162-171"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8782664/9682503/09891792.pdf","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/9891792/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

To tackle the challenging energy-efficiency problem caused by growing mobile Internet traffic, this paper proposes a deep reinforcement learning (DRL)-based green content task offloading scheme for cloud-edge-end cooperation networks. Specifically, we formulate the problem as a power-minimization model, in which requests arriving at a node for the same content can be aggregated in its queue and in-network caching is widely deployed in heterogeneous environments. A novel DRL algorithm is designed to minimize power consumption by making collaborative caching and task-offloading decisions in each slot, based on the content-request information of previous slots and the current network state. Numerical results show that our proposed content task offloading model achieves better power efficiency than existing popular counterparts in cloud-edge-end cooperation networks and converges quickly to a stable state.
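The abstract leaves the exact DRL architecture unspecified, so the sketch below is only an illustrative toy of the slot-based decision loop it describes: in each slot an agent observes a coarse summary of recent content requests, picks a joint caching-and-offloading action, and receives the negative of a power cost as its reward. The environment, the power model, the tabular Q-learning update, and all names (`toy_power`, `observe_state`, the three serving tiers) are hypothetical stand-ins for illustration, not the paper's algorithm or parameters.

```python
# Minimal sketch (assumed, not the authors' method): per-slot joint
# caching/offloading decisions learned by tabular Q-learning, with the
# reward set to negative power so that learning minimizes power.
import random
from collections import defaultdict

N_CONTENTS = 4                      # toy content catalogue cacheable at the edge
ACTIONS = [(c, tier)                # joint action: cache content c, serve from tier
           for c in range(N_CONTENTS)
           for tier in ("end", "edge", "cloud")]

def toy_power(cache_hit, tier):
    """Hypothetical per-slot power cost (lower is better)."""
    base = {"end": 3.0, "edge": 1.5, "cloud": 4.0}[tier]
    return base * (0.4 if cache_hit else 1.0)   # cached content is cheaper to serve

def observe_state(request_history):
    """State = most-requested content recently (coarse proxy for the
    request information and network state used in the paper)."""
    return max(range(N_CONTENTS), key=lambda c: request_history[c])

Q = defaultdict(float)              # tabular Q(s, a)
alpha, gamma, eps = 0.1, 0.9, 0.1
history = [random.random() for _ in range(N_CONTENTS)]
state = observe_state(history)

for slot in range(10_000):
    # epsilon-greedy joint caching / offloading decision for this slot
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    cached_content, tier = action

    # toy request arrival with a skewed popularity profile
    requested = random.choices(range(N_CONTENTS), weights=[4, 3, 2, 1])[0]
    reward = -toy_power(cached_content == requested, tier)   # minimize power

    # exponentially decayed request history stands in for "previous slots"
    history = [0.9 * h + (1.0 if c == requested else 0.0)
               for c, h in enumerate(history)]
    next_state = observe_state(history)

    # one-step Q-learning update
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

The point of the sketch is the shape of the decision problem (state from request history, joint caching/offloading action, power as negative reward); the paper's DRL algorithm would replace the tabular update with a learned policy and a far richer network model.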