{"title":"Space Performance Tradeoffs in Compressing MPI Group Data Structures","authors":"Sameer Kumar, P. Heidelberger, C. Stunkel","doi":"10.1145/2966884.2966911","DOIUrl":null,"url":null,"abstract":"MPI is a popular programming paradigm on parallel machines today. MPI libraries sometimes use O(N) data structures to implement MPI functionality. The IBM Blue Gene/Q machine has 16 GB memory per node. If each node runs 32 MPI processes, only 512 MB is available per process, requiring the MPI library to be space efficient. This scenario will become severe in a future Exascale machine with tens of millions of cores and MPI endpoints. We explore techniques to compress the dense O(N) mapping data structures that map the logical process ID to the global rank. Our techniques minimize topological communicator mapping state by replacing table lookups with a mapping function. We also explore caching schemes with performance results to optimize overheads of the mapping functions for recent translations in multiple MPI micro-benchmarks, and the 3D FFT and Algebraic Multi Grid application benchmarks.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 23rd European MPI Users' Group Meeting","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2966884.2966911","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
MPI is a popular programming paradigm on parallel machines today. MPI libraries sometimes use O(N) data structures to implement MPI functionality. The IBM Blue Gene/Q machine has 16 GB of memory per node; if each node runs 32 MPI processes, only 512 MB is available per process, requiring the MPI library to be space efficient. This constraint will become more severe on a future Exascale machine with tens of millions of cores and MPI endpoints. We explore techniques to compress the dense O(N) mapping data structures that translate a logical process ID to its global rank. Our techniques minimize the mapping state of topological communicators by replacing table lookups with a mapping function. We also explore caching schemes that reduce the overhead of these mapping functions for recently translated ranks, and present performance results on multiple MPI micro-benchmarks and on the 3D FFT and Algebraic Multigrid application benchmarks.
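The abstract's central idea can be illustrated with a small sketch: when a communicator's members form a regular subsequence of MPI_COMM_WORLD, the dense O(N) translation table (local rank to world rank) can be replaced by a constant-size descriptor and a closed-form mapping function, with a small cache front-ending the function for recently translated ranks. This is only an illustration of the general approach under a simplifying assumption (a strided membership pattern); the names `strided_map`, `xlate_cache`, and `translate` are hypothetical and not the paper's API, and the paper's actual mapping functions for topological communicators are more elaborate (which is what makes caching worthwhile).

```c
/*
 * Minimal sketch, assuming the communicator's members are a strided
 * subsequence of MPI_COMM_WORLD.  Replaces an O(N) lookup table with an
 * O(1) mapping function plus a small direct-mapped translation cache.
 * All names here are illustrative, not the paper's implementation.
 */
#include <stdint.h>

typedef struct {
    int32_t base;    /* world rank of local rank 0            */
    int32_t stride;  /* distance between consecutive members  */
} strided_map;

/* Closed-form mapping function that replaces the table lookup. */
static inline int32_t map_to_world(const strided_map *m, int32_t local_rank)
{
    return m->base + m->stride * local_rank;
}

/* Direct-mapped cache of recent translations (illustrative size). */
#define XLATE_CACHE_SIZE 64

typedef struct {
    int32_t local[XLATE_CACHE_SIZE];   /* -1 marks an empty slot */
    int32_t world[XLATE_CACHE_SIZE];
} xlate_cache;

static int32_t translate(const strided_map *m, xlate_cache *c,
                         int32_t local_rank)
{
    int32_t slot = local_rank % XLATE_CACHE_SIZE;
    if (c->local[slot] == local_rank)            /* hit: reuse translation */
        return c->world[slot];
    int32_t world_rank = map_to_world(m, local_rank);   /* miss: recompute */
    c->local[slot] = local_rank;
    c->world[slot] = world_rank;
    return world_rank;
}
```

Space-wise, a dense table for N processes costs 4N bytes per communicator, while the strided descriptor above is constant size; the cache trades a few hundred bytes of state for avoided recomputation when the same ranks are translated repeatedly, which is the space/performance tradeoff the title refers to.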