Proceedings of the 23rd European MPI Users' Group Meeting: Latest Articles

How I Learned to Stop Worrying and Love In Situ Analytics: Leveraging Latent Synchronization in MPI Collective Algorithms
Pub Date: 2016-09-25 | DOI: 10.1145/2966884.2966920
Scott Levy, Kurt B. Ferreira, Patrick M. Widener, P. Bridges, Oscar H. Mondragon
{"title":"How I Learned to Stop Worrying and Love In Situ Analytics: Leveraging Latent Synchronization in MPI Collective Algorithms","authors":"Scott Levy, Kurt B. Ferreira, Patrick M. Widener, P. Bridges, Oscar H. Mondragon","doi":"10.1145/2966884.2966920","DOIUrl":"https://doi.org/10.1145/2966884.2966920","url":null,"abstract":"Scientific workloads running on current extreme-scale systems routinely generate tremendous volumes of data for postprocessing. This data movement has become a serious issue due to its energy cost and the fact that I/O bandwidths have not kept pace with data generation rates. In situ analytics is an increasingly popular alternative in which post-simulation processing is embedded into an application, running as part of the same MPI job. This can reduce data movement costs but introduces a new potential source of interference for the application. Using a validated simulation-based approach, we investigate how best to mitigate the interference from time-shared in situ tasks for a number of key extreme-scale workloads. This paper makes a number of contributions. First, we show that the independent scheduling of in situ analytics tasks can significantly degradation application performance, with slowdowns exceeding 1000%. Second, we demonstrate that the degree of synchronization found in many modern collective algorithms is sufficient to significantly reduce the overheads of this interference to less than 10% in most cases. Finally, we show that many applications already frequently invoke collective operations that use these synchronizing MPI algorithms. Therefore, the syncronization introduced by these MPI collective algorithms can be leveraged to efficiently schedule analytics tasks with minimal changes to existing applications. This paper provides critical analysis and guidance for MPI users and developers on the importance of scheduling in situ analytics tasks. It shows the degree of synchronization needed to mitigate the performance impacts of these time-shared coupled codes and demonstrates how that synchronization can be realized in an extreme-scale environment using modern collective algorithms.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131671974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
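
To make the scheduling idea concrete, here is a minimal C sketch of the approach the abstract describes: the in situ analytics task is invoked immediately before a synchronizing collective, so its cost overlaps with time that ranks would otherwise spend blocked in the allreduce. The kernels run_simulation_step and run_analytics are hypothetical placeholders, not the paper's code.

```c
#include <mpi.h>

#define N     4096
#define STEPS 100

/* Placeholder kernels (hypothetical, not from the paper). */
static void run_simulation_step(double *field, int n) {
    for (int i = 0; i < n; ++i) field[i] += 1.0;   /* stand-in work */
}
static void run_analytics(const double *field, int n) {
    (void)field; (void)n;                          /* stand-in task */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    static double field[N];
    for (int step = 0; step < STEPS; ++step) {
        run_simulation_step(field, N);
        /* Schedule the time-shared analytics task right before the
         * synchronizing collective: a rank that spends extra time in
         * analytics would otherwise have idled waiting in the
         * allreduce, so the interference is largely absorbed by the
         * collective's latent synchronization. */
        run_analytics(field, N);
        double local = field[0], global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```
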
A Library for Advanced Datatype Programming
Pub Date: 2016-09-25 | DOI: 10.1145/2966884.2966904
J. Träff
{"title":"A Library for Advanced Datatype Programming","authors":"J. Träff","doi":"10.1145/2966884.2966904","DOIUrl":"https://doi.org/10.1145/2966884.2966904","url":null,"abstract":"We present a library providing functionality beyond the MPI standard for manipulating application data layouts described by MPI derived datatypes. The main contributions are: a) Constructors for several, new datatypes for describing application relevant data layouts. b) A set of extent-free constructors that eliminate the need for type resizing. c) New navigation and query functionality for accessing individual data elements in layouts described by datatypes, and for comparing layouts. d) Representation of datatype signatures by explicit, associated signature types, as well as functionality for explicit generation of type maps. As a simple application, we implement reduction collectives on noncontiguous, but homogeneous derived datatypes. Some of the proposed functionality could be implemented more efficiently within an MPI library.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130906533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
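
For context, a minimal sketch of the standard MPI datatype programming the library builds on: describing a matrix column takes MPI_Type_vector plus an explicit MPI_Type_create_resized to fix the extent, which is exactly the resizing step the paper's extent-free constructors aim to eliminate. The matrix dimensions here are illustrative only.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    enum { N = 4, M = 5 };            /* 4 x 5 row-major matrix */
    double matrix[N * M];
    for (int i = 0; i < N * M; ++i) matrix[i] = (double)i;

    /* One column: N blocks of 1 double, stride M doubles apart. */
    MPI_Datatype column, column_resized;
    MPI_Type_vector(N, 1, M, MPI_DOUBLE, &column);
    /* Shrink the extent to one double so that a count > 1 addresses
     * consecutive columns; extent-free constructors would make this
     * extra step unnecessary. */
    MPI_Type_create_resized(column, 0, sizeof(double), &column_resized);
    MPI_Type_commit(&column_resized);

    /* Example use, sending the first two columns as two elements:
     * MPI_Send(matrix, 2, column_resized, dest, tag, MPI_COMM_WORLD); */

    MPI_Type_free(&column);
    MPI_Type_free(&column_resized);
    MPI_Finalize();
    return 0;
}
```
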
The Potential of Diffusive Load Balancing at Large Scale
Pub Date: 2016-09-25 | DOI: 10.1145/2966884.2966887
Matthias Lieber, Kerstin Gößner, W. Nagel
{"title":"The Potential of Diffusive Load Balancing at Large Scale","authors":"Matthias Lieber, Kerstin Gößner, W. Nagel","doi":"10.1145/2966884.2966887","DOIUrl":"https://doi.org/10.1145/2966884.2966887","url":null,"abstract":"Dynamic load balancing with diffusive methods is known to provide minimal load transfer and requires communication between neighbor nodes only. These are very attractive properties for highly parallel systems. We compare diffusive methods with state-of-the-art geometrical and graph-based partitioning methods on thousands of nodes. When load balancing overheads, i.e. repartitioning computation time and migration, have to be minimized, diffusive methods provide substantial benefits.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132098519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
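
As an illustration of the method class (not the paper's implementation), here is one first-order diffusion scheme on a 1D ring: each rank exchanges its scalar load with its two neighbors and shifts a fixed fraction of each pairwise difference, so balancing requires neighbor communication only and per-step load transfer stays small.

```c
#include <mpi.h>
#include <stdio.h>

#define ALPHA 0.25   /* diffusion parameter; 0 < ALPHA <= 0.5 on a ring */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double load = (double)(rank * rank);  /* arbitrary imbalanced load */
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    for (int iter = 0; iter < 50; ++iter) {
        double load_left, load_right;
        /* Everyone sends left / receives from right, then the reverse,
         * so both exchanges are deadlock-free and fully matched. */
        MPI_Sendrecv(&load, 1, MPI_DOUBLE, left,  0,
                     &load_right, 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&load, 1, MPI_DOUBLE, right, 0,
                     &load_left, 1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* Move a fraction of each difference toward this rank. */
        load += ALPHA * (load_left - load) + ALPHA * (load_right - load);
    }
    printf("rank %d final load %f\n", rank, load);
    MPI_Finalize();
    return 0;
}
```
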
Proceedings of the 23rd European MPI Users' Group Meeting
Pub Date: 2016-09-25 | DOI: 10.1145/2966884
J. Dongarra, Daniel Holmes, A. Collis, J. Träff, Lorna Smith
{"title":"Proceedings of the 23rd European MPI Users' Group Meeting","authors":"J. Dongarra, Daniel Holmes, A. Collis, J. Träff, Lorna Smith","doi":"10.1145/2966884","DOIUrl":"https://doi.org/10.1145/2966884","url":null,"abstract":"EuroMPI is the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). The annual meeting has a long, rich tradition, and were held in Madrid (2013), Vienna (2012), Santorini (2011), Stuttgart (2010), Espoo (2009), Dublin (2008), Paris (2007), Bonn (2006), Sorrento (2005), Budapest (2004), Venice (2003), Linz (2002), Santorini (2001), Balatonfured (2000), Barcelona (1999), Liverpool (1998), Cracow (1997), Munich (1996), Lyon (1995), and Rome (1994).","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123142699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning
Pub Date: 2016-09-25 | DOI: 10.1145/2966884.2966912
A. Awan, Khaled Hamidouche, Akshay Venkatesh, D. Panda
{"title":"Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning","authors":"A. Awan, Khaled Hamidouche, Akshay Venkatesh, D. Panda","doi":"10.1145/2966884.2966912","DOIUrl":"https://doi.org/10.1145/2966884.2966912","url":null,"abstract":"Emerging paradigms like High Performance Data Analytics (HPDA) and Deep Learning (DL) pose at least two new design challenges for existing MPI runtimes. First, these paradigms require an efficient support for communicating unusually large messages across processes. And second, the communication buffers used by HPDA applications and DL frameworks generally reside on a GPU's memory. In this context, we observe that conventional MPI runtimes have been optimized over decades to achieve lowest possible communication latency for relatively smaller message sizes (up-to 1 Megabyte) and that too for CPU memory buffers. With the advent of CUDA-Aware MPI runtimes, a lot of research has been conducted to improve performance of GPU buffer based communication. However, little exists in current state of the art that deals with very large message communication of GPU buffers. In this paper, we investigate these new challenges by analyzing the performance bottlenecks in existing CUDA-Aware MPI runtimes like MVAPICH2-GDR, and propose hierarchical collective designs to improve communication latency of the MPI_Bcast primitive by exploiting a new communication library called NCCL. To the best of our knowledge, this is the first work that addresses these new requirements where GPU buffers are used for communication with message sizes surpassing hundreds of megabytes. We highlight the design challenges for our work along with the details of design and implementation. In addition, we provide a comprehensive performance evaluation using a Micro-benchmark and a CUDA-Aware adaptation of Microsoft CNTK DL framework. We report up to 47% improvement in training time for CNTK using the proposed hierarchical MPI_Bcast design.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123876262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
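
A hedged sketch of the hierarchical broadcast idea for large GPU buffers: broadcast among node leaders first, then within each node. The paper's design performs the intra-node step with NCCL's broadcast primitive; to keep this sketch self-contained, both steps use a CUDA-aware MPI_Bcast on a device pointer, and it assumes a CUDA-aware MPI library and enough GPU memory for a 256 MB buffer.

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    const int count = 64 << 20;                 /* 64M floats = 256 MB */
    float *d_buf = NULL;
    if (cudaMalloc((void **)&d_buf, (size_t)count * sizeof(float))
        != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Per-node communicator; rank 0 on each node acts as leader. */
    MPI_Comm node_comm, leader_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    /* Step 1: inter-node broadcast among the node leaders only. */
    if (node_rank == 0)
        MPI_Bcast(d_buf, count, MPI_FLOAT, 0, leader_comm);
    /* Step 2: intra-node broadcast (NCCL in the paper's design). */
    MPI_Bcast(d_buf, count, MPI_FLOAT, 0, node_comm);

    if (leader_comm != MPI_COMM_NULL) MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```
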
Towards millions of communicating threads
Pub Date: 2016-09-25 | DOI: 10.1145/2966884.2966914
Hoang-Vu Dang, M. Snir, W. Gropp
{"title":"Towards millions of communicating threads","authors":"Hoang-Vu Dang, M. Snir, W. Gropp","doi":"10.1145/2966884.2966914","DOIUrl":"https://doi.org/10.1145/2966884.2966914","url":null,"abstract":"We explore in this paper the advantages that accrue from avoiding the use of wildcards in MPI. We show that, with this change, one can efficiently support millions of concurrently communicating light-weight threads using send-receive communication.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116072976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
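
The premise can be made concrete with a small hypothetical example: every receive names an exact source and tag, never MPI_ANY_SOURCE or MPI_ANY_TAG. Wildcard-free matching lets a runtime resolve each incoming message with a hash lookup on (source, tag) instead of scanning a shared matching queue, which is what makes very large thread counts tractable.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* THREAD_MULTIPLE requested because the technique targets many
     * communicating threads per rank; one thread shown for brevity. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int partner = rank ^ 1;          /* pair up ranks: 0<->1, 2<->3, ... */
    if (partner < size) {
        int tag = 42;                /* fixed, fully specified tag */
        int sendval = rank, recvval = -1;
        /* Exact source and tag on the receive side: no wildcards. */
        MPI_Sendrecv(&sendval, 1, MPI_INT, partner, tag,
                     &recvval, 1, MPI_INT, partner, tag,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank %d got %d from %d\n", rank, recvval, partner);
    }
    MPI_Finalize();
    return 0;
}
```
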
Revisiting RDMA Buffer Registration in the Context of Lightweight Multi-kernels
Pub Date: 2016-09-25 | DOI: 10.1145/2966884.2966888
Balazs Gerofi, Masamichi Takagi, Y. Ishikawa
{"title":"Revisiting RDMA Buffer Registration in the Context of Lightweight Multi-kernels","authors":"Balazs Gerofi, Masamichi Takagi, Y. Ishikawa","doi":"10.1145/2966884.2966888","DOIUrl":"https://doi.org/10.1145/2966884.2966888","url":null,"abstract":"Lightweight multi-kernel architectures, where HPC specialized lightweight kernels (LWKs) run side-by-side with Linux on compute nodes, have received a great deal of attention recently due to their potential for addressing many of the challenges system software faces as we move towards exascale and beyond. LWKs in multi-kernels implement only a limited set of kernel functionality and the rest is supported by Linux, for example, device drivers for high-performance interconnects. While most of the operations of modern high-performance interconnects are driven entirely by user-space, memory registration for remote direct memory access (RDMA) usually involves interaction with the Linux device driver and thus comes at the price of service offloading. In this paper we introduce various optimizations for multi-kernel LWKs to eliminate the memory registration cost. In particular, we propose a safe RDMA pre-registration mechanism combined with lazy memory unmapping in the LWK. We demonstrate up to two orders of magnitude improvement in RDMA registration latency and up to 15% improvement on MPI_Allreduce() for large message sizes.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130524785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
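
To show what is being optimized, here is a minimal libibverbs sketch (illustrative only; no queue pairs or actual transfers) of the registration call whose latency the paper targets: ibv_reg_mr pins the buffer's pages and installs NIC address translations, and in a multi-kernel this is the step that must be offloaded to the Linux side unless it is pre-registered as proposed.

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA device found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    size_t len = 1UL << 26;                  /* 64 MiB buffer */
    void *buf = malloc(len);

    /* The expensive step: pins pages and installs NIC translations. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```
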
Runtime Correctness Analysis of MPI-3 Nonblocking Collectives
Pub Date: 2016-09-25 | DOI: 10.1145/2966884.2966906
Tobias Hilbrich, Matthias Weber, Joachim Protze, B. Supinski, W. Nagel
{"title":"Runtime Correctness Analysis of MPI-3 Nonblocking Collectives","authors":"Tobias Hilbrich, Matthias Weber, Joachim Protze, B. Supinski, W. Nagel","doi":"10.1145/2966884.2966906","DOIUrl":"https://doi.org/10.1145/2966884.2966906","url":null,"abstract":"The Message Passing Interface (MPI) includes nonblocking collective operations that support additional overlap between computation and communication. These new operations enable complex data movement between large numbers of processes. However, their asynchronous behavior hides and complicates the detection of defects in their use. We highlight a lack of correctness tool support for these operations and extend the MUST runtime MPI correctness tool to alleviate this complexity. We introduce a classification to summarize the types of correctness analyses that are applicable to MPI's nonblocking collectives. We identify complex wait-for dependencies in deadlock situations and incorrect use of communication buffers as the most challenging types of usage errors. We devise, demonstrate, and evaluate the applicability of correctness analyses for these errors. A scalable analysis mechanism allows our runtime approach to scale with the application. Benchmark measurements highlight the scalability and applicability of our approach at up to 4,096 application processes and with low overhead.","PeriodicalId":264069,"journal":{"name":"Proceedings of the 23rd European MPI Users' Group Meeting","volume":"1246 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129430702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
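
A small hypothetical example of the error class these analyses target, here for MPI_Ibcast: the buffer of a nonblocking collective must not be touched until the operation completes, and a runtime tool in the MUST style would report the premature write below.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int buf = rank;
    MPI_Request req;
    MPI_Ibcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

    buf = 99;        /* ERROR: buffer modified while Ibcast is pending */

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    /* buf = 99; */  /* correct: modify only after completion */

    MPI_Finalize();
    return 0;
}
```
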