Memory Scalability Evaluation of the Next-Generation Intel Bensley Platform with InfiniBand

Matthew J. Koop, Wei Huang, Abhinav Vishnu, D. Panda
DOI: 10.1109/HOTI.2006.19
Published in: 14th IEEE Symposium on High-Performance Interconnects (HOTI'06)
Publication date: 2006-08-23
Citations: 9

Abstract

As multi-core systems gain popularity for their increased computing power at low cost, the rest of the architecture must be kept in balance, such as the memory subsystem. Many existing memory subsystems can suffer from scalability issues and show memory performance degradation with more than one process running. To address these scalability issues, fully-buffered DIMMs have recently been introduced. In this paper we present an initial performance evaluation of the next-generation multi-core Intel platform by evaluating the FB-DIMM-based memory subsystem and the associated InfiniBand performance. To the best of our knowledge this is the first such study of Intel multi-core platforms with multi-rail InfiniBand DDR configurations. We provide an evaluation of the current-generation Intel Lindenhurst platform as a reference point. We find that the Intel Bensley platform can provide memory scalability to support memory accesses by multiple processes on the same machine as well as drastically improved inter-node throughput over InfiniBand. On the Bensley platform we observe a 1.85 times increase in aggregate write bandwidth over the Lindenhurst platform. For inter-node MPI-level benchmarks we show bi-directional bandwidth of over 4.55 GB/sec for the Bensley platform using 2 DDR InfiniBand host channel adapters (HCAs), an improvement of 77% over the current generation Lindenhurst platform. The Bensley system is also able to achieve a throughput of 3.12 million MPI messages/sec in the above configuration.
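The quoted figures can be sanity-checked with simple arithmetic: a 77% improvement to 4.55 GB/sec implies a Lindenhurst baseline of roughly 2.57 GB/sec, and the 3.12 million messages/sec figure can be related to payload throughput. A minimal sketch (the helper name and the 1 KB payload are our illustrative assumptions, not from the paper):

```python
# Back-of-envelope check of the numbers quoted in the abstract.
# The 4.55 GB/s, 77%, and 3.12M msgs/s figures come from the abstract;
# implied_baseline() and the 1 KB payload are illustrative assumptions.

def implied_baseline(improved, pct_gain):
    """Baseline value implied by an improved value and a percent gain."""
    return improved / (1 + pct_gain / 100.0)

# Bensley: >4.55 GB/s bi-directional with 2 DDR HCAs, a 77% gain over
# Lindenhurst, implying a Lindenhurst figure of about 2.57 GB/s.
lindenhurst_bw = implied_baseline(4.55, 77)
print(f"Implied Lindenhurst bi-directional bandwidth: {lindenhurst_bw:.2f} GB/s")

# 3.12 million MPI messages/sec is a small-message rate benchmark; at a
# hypothetical 1 KB payload, that rate would carry about 3.19 GB/s.
msg_rate = 3.12e6          # messages per second (from the abstract)
payload = 1024             # bytes per message (illustrative only)
print(f"Payload throughput at 1 KB/message: {msg_rate * payload / 1e9:.2f} GB/s")
```

This kind of cross-check is useful when reading multi-rail results: the per-HCA share (about 2.3 GB/s of the 4.55 GB/s aggregate) should stay consistent with single-rail DDR InfiniBand figures.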