Memory Scalability Evaluation of the Next-Generation Intel Bensley Platform with InfiniBand
Matthew J. Koop, Wei Huang, Abhinav Vishnu, D. Panda
14th IEEE Symposium on High-Performance Interconnects (HOTI'06), August 23, 2006
DOI: 10.1109/HOTI.2006.19
Citations: 9
Abstract
As multi-core systems gain popularity for their increased computing power at low cost, the rest of the architecture, such as the memory subsystem, must be kept in balance. Many existing memory subsystems suffer from scalability issues and show memory performance degradation with more than one process running. To address these scalability issues, fully-buffered DIMMs (FB-DIMMs) have recently been introduced. In this paper we present an initial performance evaluation of the next-generation multi-core Intel platform by evaluating the FB-DIMM-based memory subsystem and the associated InfiniBand performance. To the best of our knowledge this is the first such study of Intel multi-core platforms with multi-rail InfiniBand DDR configurations. We provide an evaluation of the current-generation Intel Lindenhurst platform as a reference point. We find that the Intel Bensley platform provides the memory scalability to support memory accesses by multiple processes on the same machine, as well as drastically improved inter-node throughput over InfiniBand. On the Bensley platform we observe a 1.85 times increase in aggregate write bandwidth over the Lindenhurst platform. For inter-node MPI-level benchmarks we show a bi-directional bandwidth of over 4.55 GB/sec for the Bensley platform using two DDR InfiniBand host channel adapters (HCAs), an improvement of 77% over the current-generation Lindenhurst platform. The Bensley system also achieves a throughput of 3.12 million MPI messages/sec in this configuration.
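The inter-node MPI-level figures above come from a bi-directional bandwidth test between two nodes. As a rough illustration of how such a measurement is typically structured (this is not the authors' benchmark code; the message size, window size, and iteration count below are assumptions chosen for illustration), a minimal MPI sketch might look like:

```c
/*
 * Minimal sketch of an MPI bi-directional bandwidth test between two ranks,
 * similar in spirit to the MPI-level benchmarks described in the abstract.
 * Message size, window size, and iteration count are illustrative assumptions.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG_SIZE   (1 << 20)   /* 1 MiB messages (assumption) */
#define WINDOW     64          /* outstanding messages per iteration (assumption) */
#define ITERATIONS 100

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 2) {
        if (rank == 0)
            fprintf(stderr, "Run with exactly 2 ranks (one per node).\n");
        MPI_Finalize();
        return 1;
    }

    char *sbuf = malloc(MSG_SIZE);
    char *rbuf = malloc(MSG_SIZE);
    memset(sbuf, 'a', MSG_SIZE);
    MPI_Request reqs[2 * WINDOW];
    int peer = 1 - rank;

    /* Both ranks post a window of sends and receives at once, so traffic
       flows in both directions simultaneously. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int it = 0; it < ITERATIONS; it++) {
        for (int w = 0; w < WINDOW; w++) {
            MPI_Irecv(rbuf, MSG_SIZE, MPI_CHAR, peer, 0,
                      MPI_COMM_WORLD, &reqs[w]);
            MPI_Isend(sbuf, MSG_SIZE, MPI_CHAR, peer, 0,
                      MPI_COMM_WORLD, &reqs[WINDOW + w]);
        }
        MPI_Waitall(2 * WINDOW, reqs, MPI_STATUSES_IGNORE);
    }
    double t1 = MPI_Wtime();

    /* Each rank both sent and received ITERATIONS*WINDOW messages;
       report the total bytes moved per second on this rank. */
    double bytes = 2.0 * (double)MSG_SIZE * WINDOW * ITERATIONS;
    if (rank == 0)
        printf("bi-directional bandwidth: %.2f GB/sec\n",
               bytes / (t1 - t0) / 1e9);

    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}
```

Posting a window of non-blocking operations before waiting is what keeps both directions of the link (and, in a multi-rail setup, both HCAs) busy; a simple blocking ping-pong would measure latency-bound, uni-directional behavior instead.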