IoT-Cache: Caching Transient Data at the IoT Edge

S. Sharma, S. K. Peddoju
2022 IEEE 47th Conference on Local Computer Networks (LCN), published 2022-09-26. DOI: 10.1109/LCN53696.2022.9843211

Abstract

Explosive traffic and service delay are bottlenecks in providing Quality of Service (QoS) to Internet of Things (IoT) end-users. Edge caching has emerged as a promising solution, but data transiency, limited caching capability, and network volatility trigger the curse of dimensionality. We therefore propose a Deep Reinforcement Learning (DRL) approach, named IoT-Cache, to optimize caching actions. An appropriate reward function is designed to increase the cache hit rate and optimize the overall data-cache allocation. A practical scenario with inconsistent requests and data item sizes is considered, and a Distributed Proximal Policy Optimization (DPPO) algorithm is proposed, enabling IoT edge nodes to learn the caching policy. The RLlib framework is used to scale training in a distributed Publish/Subscribe network. The performance evaluation demonstrates a significant improvement and faster convergence of the IoT-Cache cost function, which trades off communication cost against data freshness, over existing DRL and baseline caching solutions.
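The abstract describes a reward function that balances communication cost against the freshness of transient data. The paper's actual reward is not given here, so the following is only a minimal illustrative sketch: the `freshness` decay model, the `alpha` weight, and the unit fetch cost are all assumptions introduced for illustration, not the authors' design.

```python
def freshness(age: float, lifetime: float) -> float:
    """Fraction of a transient item's lifetime remaining (0 once expired)."""
    return max(0.0, 1.0 - age / lifetime)

def reward(hit: bool, age: float, lifetime: float,
           fetch_cost: float, alpha: float = 0.5) -> float:
    """Hypothetical reward: a hit earns the saved communication cost,
    discounted by staleness; a miss pays the cost of fetching from the source."""
    if not hit:
        return -fetch_cost
    return alpha * fetch_cost + (1.0 - alpha) * freshness(age, lifetime)

# A hit on fresh data should outscore a hit on nearly expired data,
# and both should outscore a miss.
fresh_r = reward(hit=True, age=1, lifetime=10, fetch_cost=1.0)   # 0.95
stale_r = reward(hit=True, age=9, lifetime=10, fetch_cost=1.0)   # 0.55
miss_r  = reward(hit=False, age=0, lifetime=10, fetch_cost=1.0)  # -1.0
print(fresh_r, stale_r, miss_r)
```

A DRL agent trained against a signal of this shape is pushed toward caching items that are both popular (many hits) and long-lived (high freshness at hit time), which is the trade-off the evaluation measures.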