Packet processing with blocking for bursty traffic on multi-thread network processor

Yeim-Kuan Chang, Fang-Chen Kuo
{"title":"Packet processing with blocking for bursty traffic on multi-thread network processor","authors":"Yeim-Kuan Chang, Fang-Chen Kuo","doi":"10.1109/HPSR.2009.5307419","DOIUrl":null,"url":null,"abstract":"It is well-known that there are bursty accesses in network traffic. It means a burst of packets with the same meaningful headers are usually received by routers at the same time. With such traffic, routers usually perform the same computations and access the same memory location repeatedly. To utilize this characteristic of network traffic, many cache schemes are proposed to deal with the bursty access patterns. However, in the multi-thread network processor based routers, the existing cache schemes will not suit to the bursty traffic. Since all threads may all deal with the packets with the same headers, if the former threads do not update the cache entries yet, the subsequent threads still have to repeat the computations due to the cache miss. In this paper, we propose a cache scheme called B-cache for the multi-thread network processors. B-cache blocks the subsequent threads from doing the same computations which are being processed by the former thread. By applying B-cache, any packet processing tasks with high locality characteristic, such as IP address lookup, packet classification, and intrusion detection, can avoid the duplicate computations and hence achieve a better packet processing rate. We implement the proposed B-cache scheme on Intel IXP2400 network processor, the experimental results shows that our B-cache scheme can achieves the line speed of Intel IXP2400.","PeriodicalId":251545,"journal":{"name":"2009 International Conference on High Performance Switching and Routing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 International Conference on High Performance Switching and Routing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPSR.2009.5307419","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

It is well known that network traffic exhibits bursty access patterns: a burst of packets with the same meaningful header fields usually arrives at a router within a short period of time. Under such traffic, routers repeatedly perform the same computations and access the same memory locations. To exploit this characteristic, many cache schemes have been proposed to handle bursty access patterns. However, in routers based on multi-thread network processors, existing cache schemes do not suit bursty traffic. Because all threads may process packets with the same headers, if the earlier threads have not yet updated the cache entries, the subsequent threads still miss in the cache and must repeat the same computations. In this paper, we propose a cache scheme called B-cache for multi-thread network processors. B-cache blocks subsequent threads from performing the same computation that is already being processed by an earlier thread. By applying B-cache, any packet processing task with high locality, such as IP address lookup, packet classification, and intrusion detection, can avoid duplicate computations and hence achieve a higher packet processing rate. We implement the proposed B-cache scheme on the Intel IXP2400 network processor, and the experimental results show that our B-cache scheme achieves the line speed of the Intel IXP2400.
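To illustrate the blocking idea described in the abstract, the following is a minimal sketch in C with POSIX threads. It is not the authors' IXP2400 microengine implementation; the entry states, the hash() function, and the lookup_slow_path() helper are hypothetical names introduced for illustration. A thread that misses claims the cache slot and performs the full lookup, while later threads requesting the same key block on that slot until the result is published, instead of repeating the computation.

```c
/* Minimal sketch of the blocking-cache (B-cache) idea, using POSIX threads.
 * NOT the authors' IXP2400 implementation; the entry states, hash(), and
 * lookup_slow_path() are hypothetical names used only for illustration. */
#include <pthread.h>
#include <stdint.h>

#define CACHE_SIZE 1024

enum state { EMPTY, IN_PROGRESS, VALID };

struct entry {
    pthread_mutex_t lock;
    pthread_cond_t  done;    /* signalled when the result becomes VALID */
    enum state      state;
    uint32_t        key;     /* e.g. destination IP address */
    uint32_t        result;  /* e.g. next hop or classification result */
};

static struct entry cache[CACHE_SIZE];

/* Hypothetical slow path: the full lookup in slower (SRAM/DRAM) tables. */
extern uint32_t lookup_slow_path(uint32_t key);

static uint32_t hash(uint32_t key) { return key % CACHE_SIZE; }

void bcache_init(void)
{
    for (int i = 0; i < CACHE_SIZE; i++) {
        pthread_mutex_init(&cache[i].lock, NULL);
        pthread_cond_init(&cache[i].done, NULL);
        cache[i].state = EMPTY;
    }
}

uint32_t bcache_lookup(uint32_t key)
{
    struct entry *e = &cache[hash(key)];
    uint32_t r;

    pthread_mutex_lock(&e->lock);

    /* If an earlier thread is already computing this key, block here
     * until it publishes the result instead of repeating the work. */
    while (e->state == IN_PROGRESS && e->key == key)
        pthread_cond_wait(&e->done, &e->lock);

    /* Fast path: the result for this key is already cached. */
    if (e->state == VALID && e->key == key) {
        r = e->result;
        pthread_mutex_unlock(&e->lock);
        return r;
    }

    /* The slot is busy with a different key: bypass the cache. */
    if (e->state == IN_PROGRESS) {
        pthread_mutex_unlock(&e->lock);
        return lookup_slow_path(key);
    }

    /* Miss: claim the slot so later threads with the same key will block. */
    e->state = IN_PROGRESS;
    e->key   = key;
    pthread_mutex_unlock(&e->lock);

    r = lookup_slow_path(key);          /* the expensive computation */

    pthread_mutex_lock(&e->lock);
    e->result = r;
    e->state  = VALID;
    pthread_cond_broadcast(&e->done);   /* wake the threads blocked above */
    pthread_mutex_unlock(&e->lock);
    return r;
}
```

This sketch only shows the control flow of the blocking scheme. On a real multi-thread network processor such as the IXP2400, the waiting would presumably be realized with hardware thread signalling on the microengines rather than with mutexes and condition variables.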