Deep Reinforcement Learning for Computation Offloading and Caching in Fog-Based Vehicular Networks

Dapeng Lan, Amirhosein Taherkordi, F. Eliassen, Lei Liu
Published in: 2020 IEEE 17th International Conference on Mobile Ad Hoc and Sensor Systems (MASS)
DOI: 10.1109/MASS50613.2020.00081
Publication date: 2020-12-01
Citations: 11

Abstract

The role of fog computing in future vehicular networks is becoming significant, enabling a variety of applications that demand high computing resources and low latency, such as augmented reality and autonomous driving. Fog-based computation offloading and service caching are considered two key factors in the efficient execution of resource-demanding services in such applications. While some efforts have been made on computation offloading in fog computing, only a limited amount of work has considered the joint optimization of computation offloading and service caching. As fog platforms are usually equipped with moderate computing and storage resources, we need to judiciously decide which services to cache when offloading computation tasks in order to maximize system performance. The heterogeneity, dynamicity, and stochastic properties of vehicular networks also pose challenges to optimal offloading and resource allocation. In this paper, we propose an intelligent computation offloading architecture with service caching, considering both peer-pool and fog-pool computation offloading. An optimization problem of joint computation offloading and service caching is formulated to minimize the task processing time and long-term energy utilization. Finally, we propose an algorithm based on deep reinforcement learning to solve this complex optimization problem. Extensive simulations are undertaken to verify the feasibility of our proposed scheme. The results show that our proposed scheme exhibits an effective performance improvement in computation latency and energy consumption compared to the chosen baseline.
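The abstract describes an agent that chooses between local execution, peer-pool offloading, and fog-pool offloading, with the fog option only attractive when the required service is cached. The paper itself uses deep reinforcement learning; the sketch below is a deliberately simplified stand-in that uses tabular Q-learning over a toy two-bit state (task size, cache status) to illustrate the offloading/caching trade-off. All names, cost values, and the environment dynamics here are hypothetical illustrations, not taken from the paper.

```python
import random

# Toy environment: a vehicle decides where a task runs.
# Actions: 0 = local, 1 = peer-pool (nearby vehicles), 2 = fog-pool (fog node).
# State: (task_size, service_cached); task_size in {0: small, 1: large},
# service_cached in {0, 1} says whether the fog node has cached the service.
ACTIONS = (0, 1, 2)

def step(state, action, rng):
    """Return (next_state, reward); reward = -(illustrative latency+energy cost)."""
    task_size, cached = state
    if action == 0:                      # local: high energy for large tasks
        cost = 1.0 + 2.0 * task_size
    elif action == 1:                    # peer-pool: moderate, transmission overhead
        cost = 1.2 + 1.0 * task_size
    else:                                # fog-pool: cheap only if service is cached
        cost = (0.5 if cached else 2.5) + 0.5 * task_size
    next_state = (rng.randint(0, 1), rng.randint(0, 1))  # i.i.d. task arrivals
    return next_state, -cost

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning; returns the learned Q-table."""
    rng = random.Random(seed)
    q = {(s, c, a): 0.0 for s in (0, 1) for c in (0, 1) for a in ACTIONS}
    state = (0, 0)
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(*state, x)])
        nxt, r = step(state, a, rng)
        best_next = max(q[(*nxt, x)] for x in ACTIONS)
        q[(*state, a)] += alpha * (r + gamma * best_next - q[(*state, a)])
        state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # For a large task with the service cached, fog-pool offloading (action 2)
    # has the lowest cost, so the learned policy should select it.
    print(max(ACTIONS, key=lambda a: q[(1, 1, a)]))
```

The paper's DRL agent would replace the Q-table with a neural network and the toy costs with the formulated latency/energy objective; the control loop (observe state, pick offloading target, update from reward) is the part this sketch illustrates.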