{"title":"Analysis of a memory architecture for fast packet buffers","authors":"Sundar Iyer, Ramana Rao Kompella, N. McKeown","doi":"10.1109/HPSR.2001.923663","DOIUrl":null,"url":null,"abstract":"An packet switches contain packet buffers to hold packets during times of congestion. The capacity of a high performance router is often dictated by the speed of its packet buffers. This is particularly true for a shared memory switch where the memory needs to operate at N times the line rate, where N is the number of ports in the system. Even input queued switches must be able to buffer packets at the rate at which they arrive. Therefore, as the link rates increase memory bandwidth requirements grow. With today's DRAM technology and for an OC192c (10 Gb/s) link, it is barely possible to write packets to (read packets from) memory at the rate at which they arrive (depart). As link rates increase, the problem will get harder. There are several techniques for building faster packet buffers, based on ideas from computer architecture such as memory interleaving and banking. While not directly applicable to packet switches, they form the basis of several techniques in use today. We consider one particular packet buffer architecture consisting of large, slow, low cost, DRAMs coupled with a small, fast SRAM \"buffer\". We describe and analyze a memory management algorithm (ECQF-MMA) for replenishing the cache and find a bound on the size of the SRAM.","PeriodicalId":308964,"journal":{"name":"2001 IEEE Workshop on High Performance Switching and Routing (IEEE Cat. No.01TH8552)","volume":"313 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"91","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2001 IEEE Workshop on High Performance Switching and Routing (IEEE Cat. No.01TH8552)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPSR.2001.923663","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 91
Abstract
All packet switches contain packet buffers to hold packets during times of congestion. The capacity of a high performance router is often dictated by the speed of its packet buffers. This is particularly true for a shared memory switch, where the memory needs to operate at N times the line rate, where N is the number of ports in the system. Even input queued switches must be able to buffer packets at the rate at which they arrive. Therefore, as link rates increase, memory bandwidth requirements grow. With today's DRAM technology and for an OC192c (10 Gb/s) link, it is barely possible to write packets to (read packets from) memory at the rate at which they arrive (depart). As link rates increase, the problem will get harder. There are several techniques for building faster packet buffers, based on ideas from computer architecture such as memory interleaving and banking. While not directly applicable to packet switches, they form the basis of several techniques in use today. We consider one particular packet buffer architecture consisting of large, slow, low-cost DRAMs coupled with a small, fast SRAM "buffer". We describe and analyze a memory management algorithm (ECQF-MMA) for replenishing the cache and find a bound on the size of the SRAM.
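The abstract only names the ECQF-MMA; its exact rules and the resulting SRAM bound are developed in the paper itself. As a rough illustration of the architecture being analyzed, the following Python sketch models a DRAM that is b times slower than the line rate, fronted by per-queue SRAM tail caches. A simple "fullest queue first" replenishment heuristic stands in for the real algorithm, and all names and parameters (Q, b, the arrival pattern) are assumptions made for this sketch, not values from the paper.

```python
import random
from collections import deque

# Illustrative sketch only, not the paper's ECQF-MMA. It shows the general
# shape of the hybrid buffer the abstract describes: per-queue tails cached
# in a small, fast SRAM, backed by a large, slow DRAM that accepts one
# b-cell block every b time slots. The replenishment rule ("flush the queue
# whose SRAM tail is fullest") and the parameters Q and b are assumptions.

Q = 4   # number of queues
b = 4   # DRAM block size in cells (DRAM is b times slower than the line rate)

class HybridPacketBuffer:
    def __init__(self):
        self.sram_tails = [deque() for _ in range(Q)]  # fast per-queue tail caches
        self.dram = [[] for _ in range(Q)]             # slow bulk storage, in blocks
        self.peak_sram = 0                             # worst total SRAM occupancy seen

    def arrive(self, q, cell):
        """One cell arrives at line rate and is appended to queue q's SRAM tail."""
        self.sram_tails[q].append(cell)
        self.peak_sram = max(self.peak_sram, sum(len(t) for t in self.sram_tails))

    def dram_write(self):
        """Called once every b time slots: move one b-cell block from the most
        critical queue (the fullest SRAM tail) into DRAM, if a full block is ready."""
        q = max(range(Q), key=lambda i: len(self.sram_tails[i]))
        if len(self.sram_tails[q]) >= b:
            block = [self.sram_tails[q].popleft() for _ in range(b)]
            self.dram[q].append(block)

# Tiny demo: one random arrival per time slot; the DRAM is serviced every b slots.
buf = HybridPacketBuffer()
for t in range(200):
    buf.arrive(random.randrange(Q), f"cell-{t}")
    if t % b == b - 1:
        buf.dram_write()

print("peak total SRAM occupancy (cells):", buf.peak_sram)
print("blocks moved to DRAM per queue:   ", [len(d) for d in buf.dram])
```

The peak SRAM occupancy reported by the demo is the quantity the paper bounds analytically for its ECQF-MMA; the sketch merely shows why a small SRAM can keep up with a DRAM that only accepts block writes.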