Line Replacement Algorithm for L1-scale Packet Processing Cache

Hayato Yamaki, H. Nishi
Published in: Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous 2016), November 28, 2016.
DOI: 10.1145/3004010.3006379
Citations: 8

Abstract

The power consumption of routers is becoming a serious problem as network traffic grows explosively with IoT data, big data, and similar sources. Table lookups in packet processing are a known bottleneck of the router in terms of both processing performance and power consumption. Packet Processing Cache (PPC), which accelerates table lookups and reduces their power consumption by using a cache mechanism, was previously proposed. However, it is difficult for PPC to achieve a high cache hit rate, because the PPC must be kept small, comparable to a processor's L1 cache, to attain high access speed. This study therefore considers effective line replacement algorithms that reduce cache misses without increasing the cache size. First, the defects of applying typical line replacement algorithms to PPC are examined. Second, two algorithms, LRU Insertion Policy (LIP) and Elevator Cache (ELC), together with improved variants of them called LIP1, LIP2, ELC1, and ELC2, are considered to remedy these defects. Simulations show that Elevator Cache can reduce cache misses by up to 17.4% compared with Least Recently Used (LRU), which is applied in many cache systems.
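The key distinction behind LIP is where a newly fetched line enters the replacement order: classic LRU inserts it at the most-recently-used (MRU) position, while LIP inserts it at the LRU position and promotes it to MRU only if it is referenced again, so single-use flows are evicted quickly instead of flushing hot entries. A minimal sketch of this contrast in a set-associative cache model is given below; the class name, parameters, and trace are illustrative assumptions, not from the paper, and the ELC, LIP1/LIP2, and ELC1/ELC2 variants are not modeled here.

```python
class SetAssocCache:
    """Tiny set-associative cache model for comparing insertion policies.

    policy="lru": a missing line is inserted at the MRU position (classic LRU).
    policy="lip": a missing line is inserted at the LRU position (LIP), and is
    promoted to MRU only when it is re-referenced.
    """

    def __init__(self, num_sets=64, ways=4, policy="lru"):
        self.num_sets = num_sets
        self.ways = ways
        self.policy = policy
        # Each set is an ordered list: index 0 is LRU, the last index is MRU.
        self.sets = [[] for _ in range(num_sets)]
        self.hits = 0
        self.misses = 0

    def access(self, tag):
        s = self.sets[hash(tag) % self.num_sets]
        if tag in s:
            self.hits += 1
            s.remove(tag)
            s.append(tag)      # promote to MRU on a hit (both policies)
            return True
        self.misses += 1
        if len(s) >= self.ways:
            s.pop(0)           # evict the line at the LRU position
        if self.policy == "lip":
            s.insert(0, tag)   # LIP: new line starts at the LRU position
        else:
            s.append(tag)      # LRU: new line starts at the MRU position
        return False


# Illustrative trace: one hot flow "A" interleaved with one-off flows "B", "C".
# With a 2-way set, classic LRU lets the scan evict "A"; LIP protects it.
for policy in ("lru", "lip"):
    cache = SetAssocCache(num_sets=1, ways=2, policy=policy)
    for tag in ["A", "B", "C", "A"]:
        cache.access(tag)
    print(policy, "hits:", cache.hits)
```

On this trace the LIP model keeps the hot flow resident (1 hit) while the LRU model evicts it during the scan (0 hits), which mirrors the motivation the abstract gives for moving away from plain LRU in a small, L1-sized PPC.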