Fast Buffer Memory with Deterministic Packet Departures
Mayank Kabra, Siddharth Saha, Bill Lin
14th IEEE Symposium on High-Performance Interconnects (HOTI'06), 2006-08-23
DOI: 10.1109/HOTI.2006.13
Citations: 14

Abstract
High-performance routers must temporarily store large numbers of packets in response to congestion. DRAM is typically used to implement the needed packet buffers, but DRAM devices are too slow to meet the bandwidth requirements. To bridge this bandwidth gap, a number of hybrid SRAM/DRAM packet buffer architectures have been proposed (S. Iyer and N. McKeown, 2002; S. Kumar et al., 2005). These architectures assume a very general model in which the buffer consists of many logically separate FIFO queues that may be accessed in random order. For example, crossbar routers use virtual output queues (VOQs), where each VOQ is a logical queue associated with a particular output. Depending on the scheduling algorithm used, the access pattern to these logical queues may indeed be random. However, for a number of router architectures, this worst-case random-access assumption is unnecessary because packet departure times are deterministic. One such architecture is the switch-memory-switch router (A. Prakash et al., 2002; S. Iyer et al., 2002), which efficiently mimics an output-queueing switch. Another is the load-balanced router (C.S. Chang et al., 2002; I. Keslassy et al., 2003), which has interesting scalability properties. In these architectures, for best-effort routing, the departure times of packets can be deterministically calculated before the packets are inserted into the packet buffers. In this paper, we describe a novel packet buffer architecture based on interleaved memories that takes advantage of known packet departure times to achieve simplicity and determinism. The number of interleaved DRAM banks required to implement the proposed architecture is independent of the number of logical queues, yet the architecture can match the performance of an SRAM implementation.
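The core idea — computing a packet's departure time before insertion and using it to select a memory bank, so that the bank count depends only on memory timing and not on the number of logical queues — can be sketched as follows. This is a minimal illustration, not the paper's actual scheme: the bank-selection rule (`departure_slot % num_banks`), the class, and all names are assumptions introduced here for exposition.

```python
# Illustrative sketch (not the paper's algorithm): if each of B banks can
# service one access every B time slots, storing the packet that departs at
# slot t in bank t mod B guarantees that departures never collide on a bank,
# regardless of how many logical queues the packets belong to.

class InterleavedBuffer:
    def __init__(self, num_banks):
        self.num_banks = num_banks
        # One map per bank: departure slot -> packet.
        self.banks = [dict() for _ in range(num_banks)]

    def insert(self, packet, departure_slot):
        # Departure slot is known at insertion time, so the bank is fixed here.
        bank = departure_slot % self.num_banks
        self.banks[bank][departure_slot] = packet

    def depart(self, slot):
        # At slot t, only bank t mod B is read, so banks take turns.
        bank = slot % self.num_banks
        return self.banks[bank].pop(slot, None)

buf = InterleavedBuffer(num_banks=4)
buf.insert("pkt-A", departure_slot=7)
buf.insert("pkt-B", departure_slot=11)  # same bank as pkt-A, but 4 slots apart
assert buf.depart(7) == "pkt-A"
assert buf.depart(11) == "pkt-B"
```

Note that nothing in the sketch is per-queue: packets from any number of logical queues share the same B banks, which mirrors the abstract's claim that the required bank count is independent of the number of logical queues.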