Buffer allocation for advanced packet segmentation in Network Processors

Daniel Llorente, Kimon Karras, Thomas Wild, A. Herkersdorf
{"title":"Buffer allocation for advanced packet segmentation in Network Processors","authors":"Daniel Llorente, Kimon Karras, Thomas Wild, A. Herkersdorf","doi":"10.1109/ASAP.2008.4580182","DOIUrl":null,"url":null,"abstract":"In current network processors, incoming variable-length packets are sliced using only one small segment size and then stored in the buffer. Inconveniently, short data bursts are inadequate for accessing SDRAM, commonly used for packet buffers, due to high activation and pre-charging latencies. Using large segment sizes is not optimal either because though it increases memory bandwidth, the benefit comes at the price of a heavy reduction in storing efficiency. A good solution to achieve simultaneously high performance and memory utilization consists in storing a single packet segmented using multiple segment sizes. In this paper, we study how to allocate memory for these different-sized segments in an efficient way. First we analyze the appropriate segment pool size for a multitude of traffic scenarios. Our experiments show that simple static buffer allocation does not always suffice as different segment pools may be exhausted depending on traffic. Hence we introduce a method for handling multiple segment pools not only in a static but also in a dynamic way, taking advantage of a new set of control structures based on a combination of bitmaps and linked lists. We demonstrate that our method achieves a huge reduction in control buffer size requirements in comparison to state-of-the-art control structures, together with decreasing the average number of accesses to control data.","PeriodicalId":246715,"journal":{"name":"2008 International Conference on Application-Specific Systems, Architectures and Processors","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 International Conference on Application-Specific Systems, Architectures and Processors","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASAP.2008.4580182","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

In current network processors, incoming variable-length packets are sliced using a single small segment size and then stored in the buffer. Inconveniently, short data bursts are ill-suited to the SDRAM commonly used for packet buffers because of its high activation and pre-charging latencies. Using a large segment size is not optimal either: although it increases memory bandwidth, the benefit comes at the price of a heavy reduction in storage efficiency. A good way to achieve high performance and high memory utilization simultaneously is to store a single packet segmented with multiple segment sizes. In this paper, we study how to allocate memory for these different-sized segments efficiently. First we analyze the appropriate segment pool sizes for a multitude of traffic scenarios. Our experiments show that simple static buffer allocation does not always suffice, as different segment pools may be exhausted depending on the traffic. Hence we introduce a method for handling multiple segment pools not only statically but also dynamically, taking advantage of a new set of control structures based on a combination of bitmaps and linked lists. We demonstrate that our method achieves a large reduction in control buffer size requirements compared to state-of-the-art control structures, while also decreasing the average number of accesses to control data.
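The core idea, storing one packet as a mix of large and small segments drawn from separate pools, can be illustrated with a short sketch. The C code below is not the authors' implementation: the segment sizes (64 B and 512 B), the pool capacity, the fill-with-large-then-small slicing policy, and the purely bitmap-based free tracking are all illustrative assumptions, whereas the paper's control structures combine bitmaps with linked lists and additionally support dynamic reallocation between pools.

```c
/*
 * Minimal sketch of a multi-pool segment allocator. Illustrative only:
 * the pool sizes, capacity, and slicing policy are assumptions, not the
 * scheme described in the paper.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SEGMENTS_PER_POOL 1024                   /* assumed pool capacity */
#define WORDS_PER_BITMAP  (SEGMENTS_PER_POOL / 64)

typedef struct {
    unsigned segment_size;                       /* bytes per segment      */
    uint64_t bitmap[WORDS_PER_BITMAP];           /* bit set = segment free */
} segment_pool;

static void pool_init(segment_pool *p, unsigned segment_size)
{
    p->segment_size = segment_size;
    memset(p->bitmap, 0xFF, sizeof p->bitmap);   /* all segments free */
}

/* Return the index of a free segment, or -1 if the pool is exhausted. */
static int pool_alloc(segment_pool *p)
{
    for (int w = 0; w < WORDS_PER_BITMAP; w++) {
        if (p->bitmap[w] != 0) {
            int bit = __builtin_ctzll(p->bitmap[w]); /* first free bit  */
            p->bitmap[w] &= ~(1ULL << bit);          /* mark allocated  */
            return w * 64 + bit;
        }
    }
    return -1;
}

static void pool_free(segment_pool *p, int index)
{
    p->bitmap[index / 64] |= 1ULL << (index % 64);
}

int main(void)
{
    /* Two pools: large segments for the packet body, small ones for the tail. */
    segment_pool small, large;
    pool_init(&small, 64);
    pool_init(&large, 512);

    /* Slice a 1400-byte packet: fill with large segments, finish with small ones. */
    unsigned remaining = 1400;
    while (remaining > 0) {
        segment_pool *p = (remaining >= large.segment_size) ? &large : &small;
        int idx = pool_alloc(p);
        if (idx < 0) {
            fprintf(stderr, "segment pool exhausted\n");
            return 1;
        }
        unsigned used = remaining < p->segment_size ? remaining : p->segment_size;
        printf("stored %u bytes in a %u-byte segment (index %d)\n",
               used, p->segment_size, idx);
        remaining -= used;
        (void)pool_free; /* freeing on transmit is omitted from this sketch */
    }
    return 0;
}
```

Scanning the bitmap word by word keeps the sketch short; in practice each allocation should touch control data only a few times, which is the motivation behind the combined bitmap/linked-list structures the paper proposes and its reported reduction in the average number of control-data accesses.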