Analysis of a memory architecture for fast packet buffers

Sundar Iyer, Ramana Rao Kompella, N. McKeown
{"title":"分析一种用于快速数据包缓冲的存储器结构","authors":"Sundar Iyer, Ramana Rao Kompella, N. McKeown","doi":"10.1109/HPSR.2001.923663","DOIUrl":null,"url":null,"abstract":"An packet switches contain packet buffers to hold packets during times of congestion. The capacity of a high performance router is often dictated by the speed of its packet buffers. This is particularly true for a shared memory switch where the memory needs to operate at N times the line rate, where N is the number of ports in the system. Even input queued switches must be able to buffer packets at the rate at which they arrive. Therefore, as the link rates increase memory bandwidth requirements grow. With today's DRAM technology and for an OC192c (10 Gb/s) link, it is barely possible to write packets to (read packets from) memory at the rate at which they arrive (depart). As link rates increase, the problem will get harder. There are several techniques for building faster packet buffers, based on ideas from computer architecture such as memory interleaving and banking. While not directly applicable to packet switches, they form the basis of several techniques in use today. We consider one particular packet buffer architecture consisting of large, slow, low cost, DRAMs coupled with a small, fast SRAM \"buffer\". We describe and analyze a memory management algorithm (ECQF-MMA) for replenishing the cache and find a bound on the size of the SRAM.","PeriodicalId":308964,"journal":{"name":"2001 IEEE Workshop on High Performance Switching and Routing (IEEE Cat. No.01TH8552)","volume":"313 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"91","resultStr":"{\"title\":\"Analysis of a memory architecture for fast packet buffers\",\"authors\":\"Sundar Iyer, Ramana Rao Kompella, N. McKeown\",\"doi\":\"10.1109/HPSR.2001.923663\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"An packet switches contain packet buffers to hold packets during times of congestion. The capacity of a high performance router is often dictated by the speed of its packet buffers. This is particularly true for a shared memory switch where the memory needs to operate at N times the line rate, where N is the number of ports in the system. Even input queued switches must be able to buffer packets at the rate at which they arrive. Therefore, as the link rates increase memory bandwidth requirements grow. With today's DRAM technology and for an OC192c (10 Gb/s) link, it is barely possible to write packets to (read packets from) memory at the rate at which they arrive (depart). As link rates increase, the problem will get harder. There are several techniques for building faster packet buffers, based on ideas from computer architecture such as memory interleaving and banking. While not directly applicable to packet switches, they form the basis of several techniques in use today. We consider one particular packet buffer architecture consisting of large, slow, low cost, DRAMs coupled with a small, fast SRAM \\\"buffer\\\". We describe and analyze a memory management algorithm (ECQF-MMA) for replenishing the cache and find a bound on the size of the SRAM.\",\"PeriodicalId\":308964,\"journal\":{\"name\":\"2001 IEEE Workshop on High Performance Switching and Routing (IEEE Cat. 
No.01TH8552)\",\"volume\":\"313 \",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-05-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"91\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2001 IEEE Workshop on High Performance Switching and Routing (IEEE Cat. No.01TH8552)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPSR.2001.923663\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2001 IEEE Workshop on High Performance Switching and Routing (IEEE Cat. No.01TH8552)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPSR.2001.923663","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 91

Abstract

Packet switches contain packet buffers to hold packets during times of congestion. The capacity of a high-performance router is often dictated by the speed of its packet buffers. This is particularly true for a shared memory switch, where the memory needs to operate at N times the line rate, where N is the number of ports in the system. Even input-queued switches must be able to buffer packets at the rate at which they arrive. Therefore, as link rates increase, memory bandwidth requirements grow. With today's DRAM technology and for an OC192c (10 Gb/s) link, it is barely possible to write packets to (read packets from) memory at the rate at which they arrive (depart). As link rates increase, the problem will get harder. There are several techniques for building faster packet buffers, based on ideas from computer architecture such as memory interleaving and banking. While not directly applicable to packet switches, they form the basis of several techniques in use today. We consider one particular packet buffer architecture consisting of large, slow, low-cost DRAMs coupled with a small, fast SRAM "buffer". We describe and analyze a memory management algorithm (ECQF-MMA) for replenishing the cache and find a bound on the size of the SRAM.
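The abstract names the architecture (a large DRAM bulk store fronted by a small SRAM cache) and the algorithm (ECQF-MMA) without spelling them out. The sketch below is a minimal, simplified simulation of that kind of hybrid buffer, assuming Q logical queues, a DRAM that is accessed only in blocks of B cells, and a replenishment rule that refills the backlogged queue with the fewest cached cells. The parameter values, the random departure pattern, and the "fewest cached cells first" heuristic are illustrative assumptions, not the paper's ECQF-MMA or its SRAM size bound.

import random
from collections import deque

Q = 4        # number of logical FIFO queues (assumption for illustration)
B = 4        # DRAM block size: cells transferred per DRAM access (assumption)
SLOTS = 200  # number of simulated time slots

# Bulk storage: how many cells of each queue are backlogged in DRAM.
dram = [1000] * Q
# Head cache in SRAM: each queue starts with B cells cached.
sram = [deque([f"q{i}-cell"] * B) for i in range(Q)]
underflows = 0

for t in range(SLOTS):
    # One cell departs per time slot from a randomly chosen queue
    # (a stand-in for an arbitrary scheduler reading at the line rate).
    q = random.randrange(Q)
    if sram[q]:
        sram[q].popleft()
    elif dram[q] > 0:
        underflows += 1  # head cache ran dry although the queue is backlogged

    # The DRAM completes one block transfer every B time slots; the memory
    # management algorithm decides which queue's head cache to replenish.
    # Here: the backlogged queue with the fewest cached cells, a simplified
    # stand-in for an "earliest critical queue first" style policy.
    if t % B == B - 1:
        candidates = [i for i in range(Q) if dram[i] >= B]
        if candidates:
            victim = min(candidates, key=lambda i: len(sram[i]))
            dram[victim] -= B
            sram[victim].extend([f"q{victim}-cell"] * B)

print(f"head-cache underflows over {SLOTS} slots: {underflows}")

Running the sketch prints how often the head cache failed to supply a requested cell. The paper's contribution, roughly speaking, is a bound on how much SRAM is needed so that, under its ECQF-MMA replenishment policy, such underflows are avoided; the toy heuristic above carries no such guarantee.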