Efficient Algorithms for Collective Operations with Notified Communication in Shared Windows
Muhammed Abdullah Al Ahad, C. Simmendinger, R. Iakymchuk, E. Laure, S. Markidis
2018 IEEE/ACM Parallel Applications Workshop, Alternatives To MPI (PAW-ATM), November 2018. DOI: 10.1109/PAW-ATM.2018.00006
Citations: 1
Abstract
Collective operations are commonly used in various parts of scientific applications. Especially in strong-scaling scenarios, collective operations can negatively impact overall application performance: while the load per rank decreases with increasing core counts, the time spent in, e.g., barrier operations increases logarithmically with the core count. In this article, we develop novel algorithmic solutions for collective operations -- such as Allreduce and Allgather(V) -- by leveraging notified communication in shared windows. To this end, we have developed an extension of GASPI that enables all ranks participating in a shared window to observe the entire notified communication targeted at the window. By exploiting the benefits of this extension, we deliver high-performing implementations of Allreduce and Allgather(V) on Intel and Cray clusters. These implementations achieve 2x-4x performance improvements over the best-performing MPI implementations for various data distributions.
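To make the notified-communication building block concrete, below is a minimal sketch in standard GASPI (GPI-2), the API the paper extends. It shows the core pattern the abstract relies on: a one-sided `gaspi_write_notify` delivers data plus a notification to the target, and the target synchronizes with `gaspi_notify_waitsome` instead of a global barrier. The ring pattern, segment layout, and chunk size are illustrative assumptions; this is the vanilla per-rank notification mechanism, not the paper's shared-window extension or its Allreduce/Allgather(V) algorithms.

```c
/* Minimal GASPI (GPI-2) notified-communication sketch: each rank writes a
 * chunk into its right neighbor's segment with gaspi_write_notify and waits
 * for the matching notification from its left neighbor. The notification is
 * guaranteed to become visible at the target only after the data has landed.
 * Segment id, offsets, and the ring exchange are illustrative assumptions.
 * Build against GPI-2, e.g.: gcc ring_notify.c -lGPI2 -lpthread
 */
#include <GASPI.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SEG_ID   0
#define CHUNK    1024            /* bytes per rank; arbitrary for the sketch */
#define NOTIF_ID 0

static void check(gaspi_return_t ret, const char *what)
{
    if (ret != GASPI_SUCCESS) {
        fprintf(stderr, "GASPI call failed: %s\n", what);
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    check(gaspi_proc_init(GASPI_BLOCK), "proc_init");

    gaspi_rank_t rank, nprocs;
    check(gaspi_proc_rank(&rank), "proc_rank");
    check(gaspi_proc_num(&nprocs), "proc_num");

    /* One segment per rank: first CHUNK bytes hold the local send buffer,
     * the second CHUNK bytes receive the left neighbor's data. */
    check(gaspi_segment_create(SEG_ID, 2 * CHUNK, GASPI_GROUP_ALL,
                               GASPI_BLOCK, GASPI_MEM_INITIALIZED),
          "segment_create");

    gaspi_pointer_t seg_ptr;
    check(gaspi_segment_ptr(SEG_ID, &seg_ptr), "segment_ptr");
    memset(seg_ptr, (int)rank, CHUNK);   /* fill the send buffer */

    const gaspi_rank_t right = (rank + 1) % nprocs;

    /* One-sided put of our chunk plus a notification in a single call. */
    check(gaspi_write_notify(SEG_ID, 0,           /* local seg, offset   */
                             right,               /* target rank         */
                             SEG_ID, CHUNK,       /* remote seg, offset  */
                             CHUNK,               /* size in bytes       */
                             NOTIF_ID, 1,         /* notification id/val */
                             0, GASPI_BLOCK),     /* queue, timeout      */
          "write_notify");

    /* Wait for the left neighbor's notification, then reset it so the
     * id can be reused in a later round. */
    gaspi_notification_id_t got;
    check(gaspi_notify_waitsome(SEG_ID, NOTIF_ID, 1, &got, GASPI_BLOCK),
          "notify_waitsome");
    gaspi_notification_t old_val;
    check(gaspi_notify_reset(SEG_ID, got, &old_val), "notify_reset");

    check(gaspi_wait(0, GASPI_BLOCK), "wait");    /* drain the queue */
    check(gaspi_proc_term(GASPI_BLOCK), "proc_term");
    return 0;
}
```

In standard GASPI, only the rank owning the segment can wait on notifications targeted at it; the paper's shared-window extension lets every rank sharing the window observe all incoming notifications, which is what enables the node-local cooperation behind its Allreduce and Allgather(V) algorithms.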