Performance evaluation of InfiniBand with PCI Express
Jiuxing Liu, A. Mamidala, Abhinav Vishnu, D. Panda
Proceedings. 12th Annual IEEE Symposium on High Performance Interconnects, 2004.
DOI: 10.1109/CONECT.2004.1375193
Citations: 26
Abstract
We present an initial performance evaluation of InfiniBand HCAs (host channel adapters) from Mellanox with PCI Express interfaces, and compare their performance with HCAs using PCI-X interfaces. Our results show that InfiniBand HCAs with PCI Express achieve significant performance benefits. Compared with HCAs using 64-bit/133 MHz PCI-X interfaces, they achieve 20%-30% lower latency for small messages. The small-message latency with PCI Express is around 3.8 µs, compared with 5.0 µs with PCI-X. For large messages, HCAs with PCI Express using a single port deliver unidirectional bandwidth up to 968 MB/s and bidirectional bandwidth up to 1916 MB/s, which are, respectively, 1.24 and 2.02 times the peak bandwidths achieved by HCAs with PCI-X. When both ports of the HCAs are activated, HCAs with PCI Express deliver a peak unidirectional bandwidth of 1486 MB/s and aggregate bidirectional bandwidth up to 2729 MB/s, which are 1.93 and 2.88 times the peak bandwidths obtained using HCAs with PCI-X. PCI Express also improves performance at the MPI level: a latency of 4.6 µs is achieved for small messages, and for large messages, unidirectional bandwidth of 1497 MB/s and bidirectional bandwidth of 2724 MB/s are observed.
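The MPI-level latency figures above are the kind produced by a ping-pong microbenchmark. The sketch below is a minimal illustration of that measurement method, not the authors' actual test harness; the message size, iteration counts, and warm-up scheme are assumptions.

```c
/*
 * Minimal MPI ping-pong sketch for measuring small-message latency,
 * in the style of the MPI-level results reported above. Run with
 * exactly 2 processes, e.g.: mpirun -np 2 ./pingpong
 * ITERS, WARMUP, and the 4-byte message size are assumed values.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS  1000
#define WARMUP 100

/* One round trip per iteration: rank 0 sends, rank 1 echoes back. */
static void pingpong(int rank, char *buf, int size, int iters)
{
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
}

int main(int argc, char **argv)
{
    int rank, size = 4;   /* 4-byte "small message" (assumed size) */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(size);

    pingpong(rank, buf, size, WARMUP);   /* warm up connections/caches */

    double t0 = MPI_Wtime();
    pingpong(rank, buf, size, ITERS);
    double t1 = MPI_Wtime();

    if (rank == 0)
        /* one-way latency = half the average round-trip time */
        printf("latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * ITERS));

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Bandwidth is measured analogously: stream many large messages in one direction and divide total bytes by elapsed time for the unidirectional figure, or run streams in both directions simultaneously for the bidirectional figure.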