HPC with many-core processors

X. Martorell, Jorge Bellón, Víctor López, Vicenç Beltran, Sergi Mateo, Xavier Teruel, E. Ayguadé, Jesús Labarta
{"title":"HPC with many core processors","authors":"X. Martorell, Jorge Bellón, Víctor López, Vicencc Beltran, Sergi Mateo, Xavier Teruel, E. Ayguadé, Jesús Labarta","doi":"10.1049/pbpc022e_ch1","DOIUrl":null,"url":null,"abstract":"The current trends in building clusters and supercomputers are to use medium-to-big symmetric multi-processors (SMP) nodes connected through a high-speed network. Applications need to accommodate to these execution environments using distributed and shared memory programming, and thus become hybrid. Hybrid applications are written with two or more programming models, usually message passing interface (MPI) [1,2] for the distributed environment and OpenMP [3,4] for the shared memory support. The goal of this chapter is to show how the two programming models can be made interoperable and ease the work of the programmer. Thus, instead of asking the programmers to code optimizations targeting performance, it is possible to rely on the good interoperability between the programming models to achieve high performance. For example, instead of using non-blocking message passing and double buffering to achieve computation-communication overlap, our approach provides this feature by taskifying communications using OpenMP tasks [5,6].","PeriodicalId":254920,"journal":{"name":"Many-Core Computing: Hardware and Software","volume":"150 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Many-Core Computing: Hardware and Software","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1049/pbpc022e_ch1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The current trend in building clusters and supercomputers is to use medium-to-large symmetric multiprocessor (SMP) nodes connected through a high-speed network. Applications need to adapt to these execution environments by combining distributed- and shared-memory programming, and thus become hybrid. Hybrid applications are written with two or more programming models, usually the Message Passing Interface (MPI) [1,2] for the distributed environment and OpenMP [3,4] for shared-memory support. The goal of this chapter is to show how the two programming models can be made interoperable and thereby ease the work of the programmer. Instead of asking programmers to hand-code performance optimizations, it is possible to rely on good interoperability between the programming models to achieve high performance. For example, instead of using non-blocking message passing and double buffering to achieve computation-communication overlap, our approach provides this feature by taskifying communications using OpenMP tasks [5,6].
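
To illustrate the idea, the following is a minimal, self-contained sketch (not the chapter's actual code) of taskifying MPI communications with OpenMP tasks so that a halo exchange overlaps with independent computation. The 1-D domain, buffer layout, compute kernel, and the use of plain blocking MPI_Sendrecv inside tasks are illustrative assumptions; the chapter's own runtime support and task dependences may differ.

#include <mpi.h>
#include <omp.h>
#include <stdlib.h>

#define N 1024  /* local block size (illustrative) */

static void compute_local(double *u) {
    /* kernel that touches only local points, not the halo cells */
    for (int i = 0; i < N; ++i)
        u[i] = 0.5 * (u[i] + 1.0);
}

int main(int argc, char **argv) {
    int provided, rank, size;
    /* MPI_THREAD_MULTIPLE lets tasks on different threads call MPI concurrently */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *u = calloc(N + 2, sizeof(double)); /* N points plus two halo cells */
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    #pragma omp parallel
    #pragma omp single
    {
        /* Each communication becomes a task: the blocking call stalls only
           that task, while other threads run the compute task below. */
        #pragma omp task
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        #pragma omp task
        MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                     &u[N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        #pragma omp task
        compute_local(&u[1]);       /* overlaps with the two exchanges */

        #pragma omp taskwait        /* halos and local work complete here */
    }

    free(u);
    MPI_Finalize();
    return 0;
}

Requesting MPI_THREAD_MULTIPLE is what allows the two communication tasks to issue MPI calls concurrently from different threads; with a lower threading level, the communication tasks would have to serialize their MPI calls.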