{"title":"Performance Analysis of Parallel Programs in HPC in Cloud","authors":"Mayrin George, Neenu Mary Margret, N. Nelson","doi":"10.1109/SCEECS.2018.8546951","DOIUrl":null,"url":null,"abstract":"From this paper an understanding of how applications can take advantage of modern parallel workable architectures to reduce the computational time using the wide array of models existing up to date is obtained. The performance exhibited by a single device is analyzed against parallel working architectures based on modular division of work. A private cloud has been used to get the results. A minimum of two computers are required for cluster formation. The execution speed is analyzed between parallel run devices against a single device run algorithm. One of the major point in parallel programming is the reconfiguration of the existing applications to work on parallel systems bringing out faster work results and increased efficiency. MPICH2 (message passing interface software) is used which is a standardized and portable message passing system. The MPI language helps to work on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message passing programs in C, C++, and Fortran. For doing analysis Score-P has been used which gives the necessary information on the trace buffer, number of visits to each function, time taken by each function, visit/time (um) and so on of the parallel program run. 
A graphical analysis is done for the work performed in physical cluster, cloud cluster and HPC cluster.","PeriodicalId":446667,"journal":{"name":"2018 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS)","volume":"520 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SCEECS.2018.8546951","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper develops an understanding of how applications can take advantage of modern parallel architectures to reduce computation time, drawing on the wide range of models available today. The performance of a single device is compared against parallel architectures based on a modular division of work. A private cloud was used to obtain the results; a minimum of two computers is required to form a cluster. Execution speed is compared between the algorithm running on parallel devices and on a single device. One of the major challenges in parallel programming is reconfiguring existing applications to run on parallel systems, yielding faster results and increased efficiency. MPICH2, an implementation of the Message Passing Interface (MPI), is used; MPI is a standardized and portable message-passing system that works across a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core set of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. For the analysis, Score-P was used, which reports the trace buffer requirements, the number of visits to each function, the time taken by each function, the time per visit (µs), and other metrics of the parallel program run. A graphical analysis is presented for the work performed on a physical cluster, a cloud cluster, and an HPC cluster.