Adaptive Replacement Cache Policy in Named Data Networking

Prajjwal Singh, Rajneesh Kumar, Saurabh Kannaujia, N. Sarma
DOI: 10.1109/CONIT51480.2021.9498489
Published: 2021-06-25, 2021 International Conference on Intelligent Technologies (CONIT)
Citations: 2

Abstract

The traditional IP-based Internet architecture is host-oriented and was built on the old ideology of telephony systems. Named Data Networking (NDN), which is based on Content Centric Networking (CCN), is an enhancement of, or rather an alternative to, the IP-based networking architecture. The NDN architecture allows data packets to be cached at routers so that Interests expressed by multiple hosts can be satisfied. A caching scheme therefore plays a vital role in the network's performance. Least Recently Used (LRU) and priority-based First-In First-Out (FIFO) are the cache eviction policies in the NDN Forwarding Daemon (NFD). However, neither approach gives weight to the frequency of requested data packets during eviction, and neither is scan-resistant, which can be an important feature in a CCN system. Other policies such as Least Recently/Frequently Used (LRFU) subsume LRU and LFU but require tuning parameters and may not always perform best under dynamic network traffic conditions. In this paper, we implement the Adaptive Replacement Cache (ARC) algorithm, a scan-resistant, self-tuning cache replacement policy that subsumes LRU and LFU, in the ndnSIM simulator and compare its hit-rate performance with that of the LRU replacement policy. As Content Store size affects the overall performance of NDN, we show by simulation that ARC requires a smaller Content Store than LRU. We conducted a simulation study by varying the grid topology, Content Store size, and Interest rate. Simulation results reveal that the ARC replacement policy outperforms the LRU replacement policy, achieving a 4% higher hit rate. We also observed that ARC requires a smaller Content Store than LRU to reach a 74% hit rate.
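To make the abstract's description of ARC concrete, below is a minimal sketch of the Adaptive Replacement Cache following Megiddo and Modha's published pseudocode. This is an illustration only, not the authors' ndnSIM Content Store implementation; the class name `ARCCache` and the use of Python dictionaries are our own choices. ARC keeps two cache lists, T1 (items seen once recently) and T2 (items seen at least twice), plus two "ghost" lists, B1 and B2, of recently evicted keys; hits in the ghost lists adapt the target size `p` of T1, which is what makes the policy self-tuning and scan-resistant.

```python
from collections import OrderedDict


class ARCCache:
    """Sketch of the Adaptive Replacement Cache (Megiddo & Modha).

    T1/T2 hold cached items (recency/frequency); B1/B2 are ghost lists
    of evicted keys used to adapt p, the target size of T1.
    """

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0                       # adaptive target size of T1
        self.t1 = OrderedDict()          # seen exactly once, recently
        self.t2 = OrderedDict()          # seen at least twice
        self.b1 = OrderedDict()          # ghosts of keys evicted from T1
        self.b2 = OrderedDict()          # ghosts of keys evicted from T2

    def _replace(self, in_b2):
        # Evict the LRU item of T1 or T2, guided by the target p.
        if self.t1 and (len(self.t1) > self.p or
                        (in_b2 and len(self.t1) == self.p)):
            key, _ = self.t1.popitem(last=False)
            self.b1[key] = None
        else:
            key, _ = self.t2.popitem(last=False)
            self.b2[key] = None

    def get(self, key):
        # Case I: hit in T1 or T2 -> promote to MRU position of T2.
        if key in self.t1:
            val = self.t1.pop(key)
            self.t2[key] = val
            return val
        if key in self.t2:
            self.t2.move_to_end(key)
            return self.t2[key]
        return None                      # miss (ghost hits count as misses)

    def put(self, key, val):
        if key in self.t1:               # hit: refresh value, promote to T2
            self.t1.pop(key)
            self.t2[key] = val
            return
        if key in self.t2:
            self.t2[key] = val
            self.t2.move_to_end(key)
            return
        if key in self.b1:               # Case II: ghost hit favours recency
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(in_b2=False)
            del self.b1[key]
            self.t2[key] = val
            return
        if key in self.b2:               # Case III: ghost hit favours frequency
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(in_b2=True)
            del self.b2[key]
            self.t2[key] = val
            return
        # Case IV: key is completely new.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(in_b2=False)
            else:                        # B1 empty: drop LRU of T1 outright
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.b1) < self.c:
            total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
            if total >= self.c:
                if total == 2 * self.c:
                    self.b2.popitem(last=False)
                self._replace(in_b2=False)
        self.t1[key] = val               # insert as MRU of T1
```

A sequential scan of one-time keys only churns T1 and B1, leaving the frequently hit items in T2 untouched, which is the scan resistance the paper contrasts with LRU.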