Performance Effects of Interrupt Throttle Rate on Linux Clusters using Intel Gigabit Network Adapters

Baris Guler, R. Radhakrishnan, Ronald Pepper
{"title":"使用Intel千兆网卡的Linux集群中断节流率对性能的影响","authors":"Baris Guler, R. Radhakrishnan, Ronald Pepper","doi":"10.1109/CLUSTR.2005.347089","DOIUrl":null,"url":null,"abstract":"Summary form only given. Many high performance computing clusters (HPCC) are still built using gigabit Ethernet as the interconnect connecting all the computing nodes even though there are faster (lower latency and higher bandwidth) alternatives such as Infiniband and Myrinet. The choice of interconnect mainly depends on the parallel application communication characteristics as well as budget requirements since the faster alternatives are much more expensive compared to gigabit Ethernet especially at lower node counts. Some applications require lower latency interconnect since they communicate more frequently but send relatively small messages, and others can be sending infrequent but large messages thus requiring a higher bandwidth interconnect. Since PCs, workstations and servers are designed for server-client type of environment, network interface card (NIC) drivers are usually optimized for specific network traffic patterns by using several interrupt moderation techniques/parameters, specifically interrupt throttle rate (ITR). Since in an HPCC environment the parallel application communication characteristics (i.e. network traffic patterns) are usually different than the default setting, an ITR value has to be identified to achieve best overall system performance for each type of application. This poster will present the case for why this is an important area in high-performance computing clusters connected using gigabit interconnects. It will present methodologies to tune the interrupt throttle rate parameter given to the driver to achieve a balance between application and network performance. Performance results on typical applications will be shown on different clusters","PeriodicalId":255312,"journal":{"name":"2005 IEEE International Conference on Cluster Computing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance Effects of Interrupt Throttle Rate on Linux Clusters using Intel Gigabit Network Adapters\",\"authors\":\"Baris Guler, R. Radhakrishnan, Ronald Pepper\",\"doi\":\"10.1109/CLUSTR.2005.347089\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Summary form only given. Many high performance computing clusters (HPCC) are still built using gigabit Ethernet as the interconnect connecting all the computing nodes even though there are faster (lower latency and higher bandwidth) alternatives such as Infiniband and Myrinet. The choice of interconnect mainly depends on the parallel application communication characteristics as well as budget requirements since the faster alternatives are much more expensive compared to gigabit Ethernet especially at lower node counts. Some applications require lower latency interconnect since they communicate more frequently but send relatively small messages, and others can be sending infrequent but large messages thus requiring a higher bandwidth interconnect. Since PCs, workstations and servers are designed for server-client type of environment, network interface card (NIC) drivers are usually optimized for specific network traffic patterns by using several interrupt moderation techniques/parameters, specifically interrupt throttle rate (ITR). 
Since in an HPCC environment the parallel application communication characteristics (i.e. network traffic patterns) are usually different than the default setting, an ITR value has to be identified to achieve best overall system performance for each type of application. This poster will present the case for why this is an important area in high-performance computing clusters connected using gigabit interconnects. It will present methodologies to tune the interrupt throttle rate parameter given to the driver to achieve a balance between application and network performance. Performance results on typical applications will be shown on different clusters\",\"PeriodicalId\":255312,\"journal\":{\"name\":\"2005 IEEE International Conference on Cluster Computing\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2005 IEEE International Conference on Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CLUSTR.2005.347089\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2005 IEEE International Conference on Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CLUSTR.2005.347089","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Summary form only given. Many high-performance computing clusters (HPCCs) are still built using Gigabit Ethernet as the interconnect between the compute nodes, even though faster alternatives (lower latency and higher bandwidth) such as InfiniBand and Myrinet exist. The choice of interconnect depends mainly on the communication characteristics of the parallel application as well as on budget, since the faster alternatives are much more expensive than Gigabit Ethernet, especially at lower node counts. Some applications require a lower-latency interconnect because they communicate frequently but send relatively small messages; others send infrequent but large messages and thus require a higher-bandwidth interconnect. Because PCs, workstations, and servers are designed for client-server environments, network interface card (NIC) drivers are usually optimized for specific network traffic patterns through several interrupt moderation techniques/parameters, in particular the interrupt throttle rate (ITR). Since the communication characteristics of a parallel application in an HPCC environment (i.e., its network traffic patterns) usually differ from what the default setting assumes, a suitable ITR value must be identified to achieve the best overall system performance for each type of application. This poster presents the case for why this is an important area for high-performance computing clusters connected with gigabit interconnects. It presents methodologies for tuning the interrupt throttle rate parameter passed to the driver to achieve a balance between application and network performance. Performance results for typical applications are shown on different clusters.
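The abstract describes the tuning methodology only at a high level. As a concrete illustration of what an ITR sweep can look like, below is a minimal Python sketch, not the authors' actual harness. It assumes the Linux e1000 driver's documented InterruptThrottleRate module parameter (0 = throttling off, 1 and 3 = dynamic modes, 100-100000 = a fixed interrupt rate in interrupts/sec), root privileges on the node, a NetPIPE NPtcp binary on the PATH as the benchmark, and a hypothetical peer host named node02.

```python
#!/usr/bin/env python3
"""Sweep e1000 InterruptThrottleRate values and record benchmark output.

A minimal sketch, not the paper's harness: assumes the Intel e1000
driver, root privileges, a NetPIPE `NPtcp` binary on the PATH, and a
peer host `node02` (hypothetical; substitute your own setup).
"""
import csv
import subprocess

# Candidate ITR values (interrupts/sec); 0 disables throttling entirely,
# higher fixed values trade per-packet latency against CPU overhead.
ITR_VALUES = [0, 1000, 4000, 8000, 16000, 70000]

def reload_driver(itr: int) -> None:
    """Reload the e1000 module with a fixed InterruptThrottleRate."""
    subprocess.run(["modprobe", "-r", "e1000"], check=True)
    subprocess.run(
        ["modprobe", "e1000", f"InterruptThrottleRate={itr}"],
        check=True,
    )

def run_benchmark() -> str:
    """Run one NetPIPE pass against the peer and return its raw output."""
    result = subprocess.run(
        ["NPtcp", "-h", "node02"],  # hypothetical peer hostname
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    with open("itr_sweep.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["itr", "raw_output"])
        for itr in ITR_VALUES:
            reload_driver(itr)
            writer.writerow([itr, run_benchmark()])
```

Once a value is chosen, it can be made persistent at module load time, e.g. "options e1000 InterruptThrottleRate=8000,8000" in the modprobe configuration (one value per port). Note that unloading the driver drops the network link, so a sweep like this should be driven from the console or over a separate management network.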