Performance Analysis of Parallel Programs in HPC in Cloud

Mayrin George, Neenu Mary Margret, N. Nelson
{"title":"Performance Analysis of Parallel Programs in HPC in Cloud","authors":"Mayrin George, Neenu Mary Margret, N. Nelson","doi":"10.1109/SCEECS.2018.8546951","DOIUrl":null,"url":null,"abstract":"From this paper an understanding of how applications can take advantage of modern parallel workable architectures to reduce the computational time using the wide array of models existing up to date is obtained. The performance exhibited by a single device is analyzed against parallel working architectures based on modular division of work. A private cloud has been used to get the results. A minimum of two computers are required for cluster formation. The execution speed is analyzed between parallel run devices against a single device run algorithm. One of the major point in parallel programming is the reconfiguration of the existing applications to work on parallel systems bringing out faster work results and increased efficiency. MPICH2 (message passing interface software) is used which is a standardized and portable message passing system. The MPI language helps to work on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message passing programs in C, C++, and Fortran. For doing analysis Score-P has been used which gives the necessary information on the trace buffer, number of visits to each function, time taken by each function, visit/time (um) and so on of the parallel program run. A graphical analysis is done for the work performed in physical cluster, cloud cluster and HPC cluster.","PeriodicalId":446667,"journal":{"name":"2018 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS)","volume":"520 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SCEECS.2018.8546951","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper examines how applications can exploit modern parallel architectures to reduce computation time, drawing on the wide range of parallel programming models available today. The performance of a single device is compared against parallel architectures based on a modular division of work. A private cloud is used to obtain the results; a minimum of two computers is required to form a cluster. Execution speed is compared between the algorithm running on parallel devices and on a single device. A major concern in parallel programming is reconfiguring existing applications to run on parallel systems, yielding faster results and increased efficiency. MPICH2, a standardized and portable implementation of the Message Passing Interface (MPI), is used; MPI works across a wide variety of parallel computing architectures, and the standard defines the syntax and semantics of a core set of library routines for writing portable message-passing programs in C, C++, and Fortran. Score-P is used for the analysis; it reports the trace-buffer requirements, the number of visits to each function, the time spent in each function, the time per visit (µs), and related metrics for the parallel program run. A graphical analysis compares the work performed on a physical cluster, a cloud cluster, and an HPC cluster.
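As a minimal sketch (not taken from the paper) of the kind of MPI program whose single-device and parallel execution times are being compared, the following C code divides a summation modularly across ranks and measures elapsed time with MPI_Wtime; the problem size and the quantity being summed are arbitrary illustrations.

```c
/* Hypothetical MPI example: modular division of work across ranks,
 * timed with MPI_Wtime(). Build with mpicc, launch with mpiexec (MPICH2). */
#include <mpi.h>
#include <stdio.h>

#define N 10000000L  /* illustrative problem size */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double t_start = MPI_Wtime();

    /* Modular division of work: rank r handles indices i with i % size == r. */
    double local_sum = 0.0;
    for (long i = rank; i < N; i += size)
        local_sum += (double)i * 0.5;

    /* Combine the partial sums on rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    double t_end = MPI_Wtime();
    if (rank == 0)
        printf("sum = %f computed by %d rank(s) in %f s\n",
               global_sum, size, t_end - t_start);

    MPI_Finalize();
    return 0;
}
```

Assuming the standard Score-P workflow, such a program would be instrumented by prefixing the compiler invocation with scorep (e.g. scorep mpicc), run under mpiexec with the desired number of ranks, and the resulting profile inspected with scorep-score, which reports the estimated trace-buffer size along with visits, time, and time per visit for each function, matching the metrics listed in the abstract.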