Coded Network Switches for Improved Throughput

Rami Cohen, Yuval Cassuto
DOI: 10.1145/2928275.2933281 · Proceedings of the 9th ACM International on Systems and Storage Conference · Published 2016-06-06 · Citations: 0

Abstract

With the increasing demand for network bandwidth, network switches face the challenge of serving ever-growing data rates. To parallelize the writing and reading of packets, multiple memory units (MUs) are deployed in parallel in the switch fabric. However, due to memory bandwidth limitations, contention may occur when packets requested for reading share one or more MUs. Opportunities to avoid such contention at the write stage are limited, since the reading schedule of the packets is not known when they arrive at the switch. Thus, efficient packet-placement and read policies are required. For greater flexibility in the read process, coded switches introduce redundancy into the packet-write path: additional coded chunks are computed from an incoming packet and written, along with the original packet chunks, to MUs in the switch memory. A coding scheme takes k packet chunks as input and encodes them into a codeword of n chunks (k ≤ n), where the n − k redundant chunks provide improved read flexibility. Thanks to this redundancy, only a subset of the coded chunks is required to reconstruct the original (uncoded) packet; hence a packet may be read even when only part of its chunks can be read without contention. One natural coding approach is to use [n, k] maximum distance separable (MDS) codes, which have the attractive property that any k of the n code chunks suffice to recover the original k packet chunks. Although MDS codes provide the maximum flexibility, our results show that good switching performance can be obtained even with much weaker (and lower-cost) codes, such as binary cyclic codes. Previous switch-coding works [1], [2] considered a stronger (and more costly) model that guarantees simultaneous reconstruction of worst-case packet requests.
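To make the chunk-level coding concrete, the following minimal sketch (ours, not the paper's code) implements the simplest MDS code, a [k+1, k] single-parity code: a packet split into k equal-length chunks gets one XOR parity chunk, so any k of the n = k + 1 chunks suffice to rebuild the packet.

```python
# Illustrative [k+1, k] single-parity MDS code (a sketch, not the paper's scheme).

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Append one XOR parity chunk to k equal-length data chunks."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = xor(parity, c)
    return chunks + [parity]

def decode(coded):
    """Recover the k data chunks; tolerates at most one missing (None) chunk."""
    if None in coded:
        i = coded.index(None)
        size = len(next(c for c in coded if c is not None))
        # XOR of all surviving chunks reproduces the missing one.
        rec = bytes(size)
        for c in coded:
            if c is not None:
                rec = xor(rec, c)
        coded = coded[:i] + [rec] + coded[i + 1:]
    return coded[:-1]
```

In a switch, the n chunks would be written to n distinct MUs, and a read succeeds whenever any k of them are reachable without contention; codes with n − k > 1 redundant chunks (e.g. Reed-Solomon) extend the same idea to multiple contended MUs.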
In the coded switching paradigm we propose, the objective is to maximize the number of full packets read from the switch memory simultaneously in a read cycle. The packets to read in each cycle are specified in a request issued by the switch's control plane. We show that coding packets upon their write can significantly increase the number of packets read, in return for a small increase in the write load needed to store the redundancy; coding can therefore significantly increase the overall switching throughput. We identify and study two key components of high-throughput coded switches: 1) read algorithms that recover the maximal number of packets given an arbitrary request for previously written packets, and 2) placement policies that determine how coded chunks are placed in the switch MUs. Our results contribute art and insight for each of these two components and, more importantly, reveal the tight relations between them. At a high level, the choice of placement policy can improve both the performance and the computational efficiency of the read algorithm. To show the former, we derive a collection of analysis tools to calculate and/or bound the performance of a read algorithm given the placement policy in use. For the latter, we show a huge gap between an NP-hard optimal read problem for one policy (uniform placement) and extremely efficient optimal read algorithms for two others (cyclic and design placements).
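The read problem can be illustrated with a toy model (our construction, using a greedy heuristic rather than any of the paper's optimal algorithms): each MU serves one chunk per read cycle, each requested packet is described by the set of MUs holding its chunks, and a packet is read if k of its MUs can be claimed exclusively.

```python
# Hypothetical read-cycle model: greedy heuristic, not the paper's algorithms.

def greedy_read(requests, k):
    """requests: list of sets of MU ids holding each requested packet's chunks.
    Each MU serves at most one chunk per cycle; a packet is read if k of its
    MUs can be claimed. Returns the number of packets fully read."""
    busy = set()
    served = 0
    # Serve packets with the fewest placement options first (a common heuristic).
    for mus in sorted(requests, key=len):
        free = [m for m in sorted(mus) if m not in busy]
        if len(free) >= k:
            busy.update(free[:k])  # claim exactly k MUs for this packet
            served += 1
    return served
```

A small example shows why redundancy helps: with k = 2 and no coding, two packets whose chunks sit on MUs {0, 1} and {1, 2} contend on MU 1, so only one is read per cycle; with one redundant chunk each (placements {0, 1, 3} and {1, 2, 4}), both are read. Greedy claiming is not optimal in general, consistent with the NP-hardness the paper shows for optimal reads under uniform placement.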