Concurr. Pract. Exp. Pub Date: 2000-10-01. DOI: 10.1002/1096-9128(200010)12:12<1147::AID-CPE526>3.0.CO;2-Q
L. Brieger
{"title":"HPF to OpenMP on the Origin2000: a case study","authors":"L. Brieger","doi":"10.1002/1096-9128(200010)12:12%3C1147::AID-CPE526%3E3.0.CO;2-Q","DOIUrl":"https://doi.org/10.1002/1096-9128(200010)12:12%3C1147::AID-CPE526%3E3.0.CO;2-Q","url":null,"abstract":"The geophysics group at CRS4 has long developed echo reconstruction codes in HPF on distributed-memory machines. Now, however, with the arrival of shared-memory machines and their native OpenMP compilers, the transfer to OpenMP would seem to present the logical next step in our code development strategy. Recent experience with porting one of our important HPF codes to OpenMP does not bear this out— at least not on the Origin2000. The OpenMP code suffers from the immaturity of the standard, and the operating system’s handling of UNIX threads seems to severely penalize OpenMP performance. On the other hand, the HPF code on the Origin2000 is fast, scalable and not disproportionately sensitive to load on the machine. Copyright 2000 John Wiley & Sons, Ltd.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"59 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114101729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurr. Pract. Exp. Pub Date: 2000-10-01. DOI: 10.1002/1096-9128(200010)12:12<1131::AID-CPE528>3.0.CO;2-2
A. Gebremedhin, F. Manne
{"title":"Scalable parallel graph coloring algorithms","authors":"A. Gebremedhin, F. Manne","doi":"10.1002/1096-9128(200010)12:12%3C1131::AID-CPE528%3E3.0.CO;2-2","DOIUrl":"https://doi.org/10.1002/1096-9128(200010)12:12%3C1131::AID-CPE528%3E3.0.CO;2-2","url":null,"abstract":"SUMMARY Finding a good graph coloring quickly is often a crucial phase in the development of efficient, parallel algorithms for many scientific and engineering applications. In this paper we consider the problem of solving the graph coloring problem itself in parallel. We present a simple and fast parallel graph coloring heuristic that is well suited for shared memory programming and yields an almost linear speedup on the PRAM model. We also present a second heuristic that improves on the number of colors used. The heuristics have been implemented using OpenMP. Experiments conducted on an SGI Cray Origin 2000 supercomputer using very large graphs from finite element methods and eigenvalue computations validate the theoretical run-time analysis. Copyright 2000 John Wiley & Sons, Ltd.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130063137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurr. Pract. Exp. Pub Date: 2000-10-01. DOI: 10.1002/1096-9128(200010)12:12<1177::AID-CPE533>3.0.CO;2-V
L. Adhianto, F. Bodin, B. Chapman, L. Hascoët, A. Kneer, D. Lancaster, I. Wolton, M. Wirtz
{"title":"Tools for OpenMP application development: the POST project","authors":"L. Adhianto, F. Bodin, B. Chapman, L. Hascoët, A. Kneer, D. Lancaster, I. Wolton, M. Wirtz","doi":"10.1002/1096-9128(200010)12:12%3C1177::AID-CPE533%3E3.0.CO;2-V","DOIUrl":"https://doi.org/10.1002/1096-9128(200010)12:12%3C1177::AID-CPE533%3E3.0.CO;2-V","url":null,"abstract":"OpenMP was recently proposed by a group of vendors as a programming model for shared memory parallel architectures. Thr growing popularity of such systems, and the rapid availability of product-strength compilers for OpenMP, seem to guarantee a broad take-up of this paradigm if appropriate tools for application development can be provided. POST is an EU-funded project that is developing a productm based on FORESYS from Simulog, which aims to reduce the human effort involved ub the creation of OpenMP code. Additional research within the project focuses on alternative techniques to support OpenMP application development that target a broad variety of users. Functionnality ranges from fully automatic strategies to novice users, the provision of parallelisation hints, and step-by-step strategies to porting code, to a range of transformations and source code analyses that may be used by experts, including the ability to create application specific transformations. The work is accompanied by the development of OpenMP versions of several industrial applications.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132028262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurr. Pract. Exp. Pub Date: 2000-10-01. DOI: 10.1002/1096-9128(200010)12:12<1121::AID-CPE531>3.0.CO;2-N
Lorna Smith, P. Kent
{"title":"Development and performance of a mixed OpenMP/MPI quantum Monte Carlo code","authors":"Lorna Smith, P. Kent","doi":"10.1002/1096-9128(200010)12:12%3C1121::AID-CPE531%3E3.0.CO;2-N","DOIUrl":"https://doi.org/10.1002/1096-9128(200010)12:12%3C1121::AID-CPE531%3E3.0.CO;2-N","url":null,"abstract":"| An OpenMP version of a Quantum Monte Carlo (QMC) code has been developed. The original parallel MPI version of the QMC code was developed by the Electronic Structure of Solids HPCI consortium in collaboration with EPCC. This code has been highly successful, and has resulted in numerous publications based on results generated on the National Cray MPP systems at EPCC. Recent interest has focussed on also utilising shared-memory parallelism in the code since future HPC systems are expected to comprise clusters of SMP nodes. The code has been re-written to allow for an arbitrary mix of OpenMP and MPI parallelism. The various issues which arose during the parallelisation are discussed. The performance of the mixed OpenMP/MPI code has been assessed on an SGI Origin 2000 system and the results compared and contrasted to the original MPI version. Keywords| OpenMP, MPI, HPC applications, performance.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114525032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurr. Pract. Exp. Pub Date: 2000-09-01. DOI: 10.1002/1096-9128(200009)12:11<1051::AID-CPE520>3.0.CO;2-M
N. Stankovic
{"title":"An open Java system for SPMD programming","authors":"N. Stankovic","doi":"10.1002/1096-9128(200009)12:11%3C1051::AID-CPE520%3E3.0.CO;2-M","DOIUrl":"https://doi.org/10.1002/1096-9128(200009)12:11%3C1051::AID-CPE520%3E3.0.CO;2-M","url":null,"abstract":"We present here our work aimed at developing an open, network based visual software engineering environment for parallel processing called Visper. It is completely implemented in Java and supports the message-passing model. Java offers the basic platform independent services needed to integrate heterogeneous hardware into a seamless computational resource. Easy installation, participation and flexibility are seen as key properties when using the system. We believe the approach taken simplifies the development and testing of parallel programs by enabling modular, object oriented technique based on our extensions to the Java API. Copyright 2000 John Wiley & Sons, Ltd.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133805334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurr. Pract. Exp. Pub Date: 2000-09-01. DOI: 10.1002/1096-9128(200009)12:11<1039::AID-CPE519>3.0.CO;2-B
V. Getov, Paul A. Gray, V. Sunderam
{"title":"Aspects of portability and distributed execution for JNI-wrapped message passing libraries","authors":"V. Getov, Paul A. Gray, V. Sunderam","doi":"10.1002/1096-9128(200009)12:11%3C1039::AID-CPE519%3E3.0.CO;2-B","DOIUrl":"https://doi.org/10.1002/1096-9128(200009)12:11%3C1039::AID-CPE519%3E3.0.CO;2-B","url":null,"abstract":"This paper discusses an approach which aims to provide legacy message passing libraries with Java-like portability in a heterogeneous, metacomputing environment. The results of such portability permit distributed computing components to be soft-loaded or soft-installed in a dynamic fashion, onto cooperating resources for concurrent, synchronized parallel execution. This capability provides researchers with the ability to tap into a much larger resource pool and to utilize highly tuned codes for achievingperformance. Necessarily, the Java programming language is a significant component. The Java Native Interface (JNI) is used to wrap message passing libraries written in other languages, and the bytecode which is generated for the front-end may be analyzed in order to completely determine the needs of the code which it wraps. This characterization allows the pre-configuration of a remote environment so as to be able to support execution. The usefulness of the portability gained by our approach is illustrated through examples showing the soft-installation of a process using an MPI computational substrate and the soft-installation of a process which requires a C-based communication library based upon the efficient multi-cast communication package, CCTL. The examples show that significant gains in performance can be achieved while allowing message passing execution to still exhibit high levels of portability.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121211352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concurr. Pract. Exp. Pub Date: 2000-09-01. DOI: 10.1002/1096-9128(200009)12:11<1093::AID-CPE522>3.0.CO;2-6
G. Thiruvathukal, P. Dickens, Shahzad Bhatti
{"title":"Java on networks of workstations (JavaNOW): a parallel computing framework inspired by Linda and the Message Passing Interface (MPI)","authors":"G. Thiruvathukal, P. Dickens, Shahzad Bhatti","doi":"10.1002/1096-9128(200009)12:11%3C1093::AID-CPE522%3E3.0.CO;2-6","DOIUrl":"https://doi.org/10.1002/1096-9128(200009)12:11%3C1093::AID-CPE522%3E3.0.CO;2-6","url":null,"abstract":"Networks of workstations are a dominant force in the distributed computing arena, due primarily to the excellent price/performance ratio of such systems when compared to traditionally massively parallel architectures. It is therefore critical to develop programming languages and environments that can potentially harness the raw computational power availab le on these systems. In this article, we present JavaNOW (Java on Networks of Workstations), a Java based framework for parallel programming on networks of workstations. It creates a virtual parallel machine similar to the MPI (Message Passing Interface) model, and provides distributed associative shared memory similar to Linda memory model but with a flexible set of primitive operations. JavaNOW provides a simple yet powerful framework for performing computation on networks of workstations. In addition to the Linda memory model, it provides for shared objects, implicit multithreading, implicit synchronization, object dataflow, and collective communications similar to those defined in MPI. JavaNOW is also a component of the Computational Neighborhood [63], a Java-enabled suite of services for desktop computational sharing. The intent of JavaNOW is to present an environment for parallel computing that is both expressive and reliable and ultimately can deliver good to excellent performance. As JavaNOW is a work in progress, this article emphasizes the expressive potential of the JavaNOW environment and presents preliminary performance results only.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114647860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}