Coded Network Switches for Improved Throughput
Rami Cohen, Yuval Cassuto
Proceedings of the 9th ACM International on Systems and Storage Conference
Published: June 6, 2016 · DOI: 10.1145/2928275.2933281
Abstract
With the increasing demand for network bandwidth, network switches face the challenge of serving growing data rates. To parallelize the writing and reading of packets, multiple memory units (MUs) are deployed in parallel in the switch fabric. However, due to memory-bandwidth limitations, contention may occur if packets requested for reading happen to share one or more MUs. The ability to avoid such contention at the write stage is limited, because the read schedule of the packets is not known when they arrive at the switch. Efficient packet-placement and read policies are therefore required. For greater flexibility in the read process, coded switches introduce redundancy on the packet-write path: additional coded chunks are computed from an incoming packet and written, along with the original packet chunks, to MUs in the switch memory. A coding scheme takes k packet chunks as input and encodes them into a codeword of n chunks (k ≤ n), where the n − k redundant chunks provide improved read flexibility. Thanks to the redundancy, only a subset of the coded chunks is needed to reconstruct the original (uncoded) packet, so a packet may be read even when only part of its chunks can be read without contention. One natural coding approach is to use [n, k] maximum distance separable (MDS) codes, which have the attractive property that any k of the n code chunks suffice to recover the original k packet chunks. Although MDS codes provide maximum flexibility, our results show that good switching performance can be obtained even with much weaker (and lower-cost) codes, such as binary cyclic codes. Previous switch-coding works [1], [2] considered a stronger (and more costly) model that guarantees simultaneous reconstruction of worst-case packet requests.
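To make the read flexibility concrete, here is a minimal sketch of the simplest [k+1, k] MDS code: a single XOR parity chunk appended to the k packet chunks, so that any k of the n = k+1 chunks reconstruct the packet. This is only an illustration of the MDS property described above, not the codes used in the paper; the function names are hypothetical.

```python
from functools import reduce

def xor(a, b):
    """Bytewise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Encode k packet chunks into n = k + 1 coded chunks by
    appending their XOR parity (a [k+1, k] single-parity MDS code)."""
    return chunks + [reduce(xor, chunks)]

def decode(coded, missing):
    """Reconstruct the k original chunks when the chunk at index
    `missing` cannot be read (e.g. its MU is contended).
    Any k of the n chunks suffice -- the MDS property."""
    k = len(coded) - 1
    if missing == k:                 # only the parity chunk is unreadable
        return coded[:k]
    available = [c for i, c in enumerate(coded) if i != missing]
    data = coded[:k]
    data[missing] = reduce(xor, available)  # XOR of the rest recovers it
    return data
```

With k = 2, the codeword has n = 3 chunks, and losing any single chunk still leaves a readable packet; stronger MDS codes generalize this to tolerating n − k unavailable chunks.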
In the coded switching paradigm we propose, the objective is to maximize the number of full packets read simultaneously from the switch memory in a read cycle. The packets to read in each read cycle are specified in a request issued by the switch's control plane. We show that coding packets upon writing can significantly increase the number of packets read, in return for a small increase in the write load needed to store the redundancy. Coding can thus significantly increase the overall switching throughput. We identify and study two key components of high-throughput coded switches: 1) read algorithms that recover the maximal number of packets given an arbitrary request for previously written packets, and 2) placement policies that determine how coded chunks are placed in the switch MUs. Our results contribute techniques and insight for each of these two components and, more importantly, reveal the tight relation between them. At a high level, the choice of placement policy can improve both the performance and the computational efficiency of the read algorithm. To show the former, we derive a collection of analysis tools to calculate and/or bound the performance of a read algorithm given the placement policy in use. For the latter, we show a huge gap between an NP-hard optimal read problem for one policy (uniform placement) and extremely efficient optimal read algorithms for two others (cyclic and design placements).
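The interplay between the two components can be sketched as follows, under simplifying assumptions not taken from the paper: each MU serves at most one chunk per read cycle, m ≥ n so cyclic placement puts a packet's chunks on distinct MUs, and the read algorithm is exhaustive brute force standing in for the efficient optimal algorithms the abstract mentions. All names here are illustrative.

```python
from itertools import combinations

def cyclic_placement(num_packets, n, m):
    """Cyclic placement policy (sketch): the n coded chunks of
    packet p go on consecutive MUs starting at p*n mod m."""
    return {p: [(p * n + j) % m for j in range(n)] for p in range(num_packets)}

def assignable(packets, placement, k):
    """Check whether every packet in `packets` can be given k of its
    MUs, with each MU serving at most one chunk this cycle."""
    used = set()
    def rec(i):
        if i == len(packets):
            return True
        for combo in combinations(placement[packets[i]], k):
            if used.isdisjoint(combo):
                used.update(combo)
                if rec(i + 1):
                    return True
                used.difference_update(combo)
        return False
    return rec(0)

def max_readable(request, placement, k):
    """Brute-force read algorithm: size of the largest subset of the
    requested packets readable contention-free in one cycle.
    (Exponential -- illustrates the objective, not an efficient solver.)"""
    for r in range(len(request), 0, -1):
        if any(assignable(s, placement, k) for s in combinations(request, r)):
            return r
    return 0
```

For example, with m = 2 MUs and uncoded-like parameters n = k = 2, two packets placed cyclically collide on the same MU pair and only one is readable per cycle; with more MUs, or with k < n thanks to redundancy, the same request completes in a single cycle.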