Improving the Interoperability between MPI and Task-Based Programming Models

Kevin Sala, Jorge Bellón, Pau Farré, Xavier Teruel, Josep M. Pérez, Antonio J. Peña, Daniel J. Holmes, Vicenç Beltran, Jesús Labarta
{"title":"Improving the Interoperability between MPI and Task-Based Programming Models","authors":"Kevin Sala, Jorge Bellón, Pau Farré, Xavier Teruel, Josep M. Pérez, Antonio J. Peña, Daniel J. Holmes, Vicencc Beltran, Jesús Labarta","doi":"10.1145/3236367.3236382","DOIUrl":null,"url":null,"abstract":"In this paper we propose an API to pause and resume task execution depending on external events. We leverage this generic API to improve the interoperability between MPI synchronous communication primitives and tasks. When an MPI operation blocks, the task running is paused so that the runtime system can schedule a new task on the core that became idle. Once the MPI operation is completed, the paused task is put again on the runtime system's ready queue. We expose our proposal through a new MPI threading level which we implement through two approaches. The first approach is an MPI wrapper library that works with any MPI implementation by intercepting MPI synchronous calls, implementing them on top of their asynchronous counterparts. In this case, the task-based runtime system is also extended to periodically check for pending MPI operations and resume the corresponding tasks once MPI operations complete. The second approach consists in directly modifying the MPICH runtime system, a well-known implementation of MPI, to directly call the pause/resume API when a synchronous MPI operation blocks and completes, respectively. 
Our experiments reveal that this proposal not only simplifies the development of hybrid MPI+OpenMP applications that naturally overlap computation and communication phases; it also improves application performance and scalability by removing artificial dependencies across communication tasks.","PeriodicalId":225539,"journal":{"name":"Proceedings of the 25th European MPI Users' Group Meeting","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 25th European MPI Users' Group Meeting","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3236367.3236382","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 19

Abstract

In this paper we propose an API to pause and resume task execution depending on external events. We leverage this generic API to improve the interoperability between MPI synchronous communication primitives and tasks. When an MPI operation blocks, the running task is paused so that the runtime system can schedule a new task on the core that became idle. Once the MPI operation completes, the paused task is placed back on the runtime system's ready queue. We expose our proposal through a new MPI threading level, which we implement using two approaches. The first approach is an MPI wrapper library that works with any MPI implementation by intercepting synchronous MPI calls and implementing them on top of their asynchronous counterparts. In this case, the task-based runtime system is also extended to periodically check for pending MPI operations and to resume the corresponding tasks once those operations complete. The second approach consists of directly modifying MPICH, a well-known MPI implementation, to call the pause and resume API when a synchronous MPI operation blocks and completes, respectively. Our experiments reveal that this proposal not only simplifies the development of hybrid MPI+OpenMP applications that naturally overlap computation and communication phases; it also improves application performance and scalability by removing artificial dependencies across communication tasks.
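The wrapper approach described above can be sketched as follows. This is a minimal, hypothetical simulation of the mechanism, not the paper's actual API: the intercepted blocking call issues the asynchronous counterpart and pauses the task, and a runtime polling service (standing in for periodic `MPI_Test` calls) later resumes tasks whose operations have completed. All names (`PendingOp`, `wrapped_blocking_recv`, `poll_pending`) are illustrative, and the MPI request is modeled as a simple countdown.

```python
# Hedged sketch of the wrapper-library approach: a blocking MPI call is
# replaced by its nonblocking counterpart plus a "pause" of the calling
# task; a polling service later tests each request and "resumes" the task.
# The request object is simulated: it completes after a fixed number of polls.

from dataclasses import dataclass

@dataclass
class PendingOp:
    ticks_left: int      # stand-in for an MPI_Request: completes after N polls
    task_id: int         # the paused task to resume when the op completes
    done: bool = False

pending: list = []       # operations registered by intercepted blocking calls
resumed: list = []       # order in which tasks return to the ready queue

def wrapped_blocking_recv(task_id, completes_after):
    """Intercepted blocking call: issue the async counterpart, register the
    request, and pause the task instead of spinning on the core."""
    pending.append(PendingOp(completes_after, task_id))

def poll_pending():
    """Runtime polling service: test each pending request (MPI_Test in the
    real wrapper) and resume tasks whose operation has completed."""
    for op in pending:
        if op.done:
            continue
        op.ticks_left -= 1
        if op.ticks_left == 0:
            op.done = True
            resumed.append(op.task_id)   # back on the ready queue

# Task 1 "blocks" on an op completing after 2 polls, task 2 after 1 poll.
wrapped_blocking_recv(1, 2)
wrapped_blocking_recv(2, 1)
poll_pending()
poll_pending()
print(resumed)   # task 2 resumes before task 1: [2, 1]
```

The key point the sketch illustrates is that no core is held hostage by a blocked communication: the pause frees the core for other ready tasks, and completion order (not issue order) determines when each task re-enters the ready queue.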