{"title":"Towards a higher level of abstraction in parallel programming","authors":"D. B. Skillicorn","doi":"10.1109/PMMPC.1995.504344","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504344","url":null,"abstract":"There are substantial problems with exploiting parallelism, particularly massive parallelism. One attempt to solve these problems is general-purpose parallelism, which searches for models that are abstract enough to be useful for software development, but that map well enough to realistic architectures that they deliver performance. We show how the skeletons model is a suitable general-purpose model for massive parallelism, and show its power by illustrating a new algorithm for search in structured text. The algorithm is sufficiently complex that it would have been hard to find without the theory underlying the Bird-Meertens formalism. The example also demonstrates the opportunities for parallelism in new, non-scientific and non-numeric applications.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126146907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proving data-parallel programs correct: the proof outlines approach","authors":"L. Bougé, D. Cachera","doi":"10.1109/PMMPC.1995.504360","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504360","url":null,"abstract":"We present a proof outline generation system for a simple data-parallel kernel language called ℒ. Proof outlines for ℒ are very similar to those for usual scalar-like languages. In particular, they can be mechanically generated backwards from the final post-assertion of the program. They thus provide a valuable basis for implementing a validation assistance tool for data-parallel programming. The equivalence between proof outlines and the sound and complete Hoare logic defined for ℒ in previous papers is also discussed.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116076988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A-NETL: a language for massively parallel object-oriented computing","authors":"T. Baba, T. Yoshinaga","doi":"10.1109/PMMPC.1995.504346","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504346","url":null,"abstract":"A-NETL is a parallel object-oriented language intended for managing small to massive parallelism with medium grain size. Its design goals are to support various styles of message passing, to treat data-parallel operations at the same cost as programming languages of the SIMD type, to provide several synchronization facilities for autonomous control, and to provide information for the efficient allocation of objects to nodes. Starting from these design principles, the paper goes on to describe the syntax and semantics of the language and the major implementation issues, including the reduction of message communication cost, efficient implementation of statically and dynamically created massive objects, the realization of synchronization schemes, the object-to-node allocation scheme to minimize communication cost, and logical-time-based debugging for asynchronous operations.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"42 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132375019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A scalable tuple space model for structured parallel programming","authors":"Antonio Corradi, F. Zambonelli, L. Leonardi","doi":"10.1109/PMMPC.1995.504338","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504338","url":null,"abstract":"The paper proposes and analyses a scalable model of an associative distributed shared memory for massively parallel architectures. The proposed model is hierarchical and fits the modern style of structured parallel programming. If parallel applications are composed of a set of modules with a well-defined scope of interaction, the proposed model can induce a memory access latency time that only logarithmically increases with the number of nodes. Experimental results show the effectiveness of the model with a transputer-based implementation.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130142955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Space-limited procedures: a methodology for portable high-performance","authors":"B. Alpern, L. Carter, J. Ferrante","doi":"10.1109/PMMPC.1995.504336","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504336","url":null,"abstract":"This paper presents the generic program approach to achieving portable high performance. This approach has three phases. In the first, a generic program, defining a family of semantically-equivalent program variants, is written. In the second, the generic program is specialized to the variant that performs best on an abstract model of the target computer. In the third, this variant is translated to run on the target computer. The Parallel Memory Hierarchy (PMH) generic model is used to define the abstract models of target computers. Using this approach, a spectrum of solutions is possible. At one end of the spectrum, a simple generic program can be written, with roughly the same difficulty as writing a sequential program, that can be tuned automatically to achieve reasonably good performance on a wide variety of computers. This solution can be refined to give better performance. At the labor-intensive end of the spectrum, an application can be tuned so that it achieves the best possible performance on each of a collection of computers.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122943895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic generation of parallel algorithms","authors":"T. Beth","doi":"10.1109/PMMPC.1995.504349","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504349","url":null,"abstract":"In this talk we present the intrinsic connection between modelling the suitable data type by algebraic specification and the correct and efficient implementation of high-speed parallel algorithms in hardware or software. The design tool IDEAS, developed at the author's institution and representing the first such instrument for automatic parallel algorithm generation, is described.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121415360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structuration of the ALPHA language","authors":"F. D. Dinechin, P. Quinton, T. Risset","doi":"10.1109/PMMPC.1995.504337","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504337","url":null,"abstract":"This paper presents extensions to ALPHA, a language based upon the formalism of affine recurrence equations (AREs). These extensions address the need for parametric and structured systems of such AREs. Similar to, but more general than, the map operator of classical functional languages, the ALPHA structuring techniques provide a dense and powerful description of complex systems referencing each other. Such structured systems of AREs may be interpreted as (or translated into) sequential function calls, hierarchical hardware descriptions, or any SIMD flavour of structured programming. With the help of examples, we give an overview of these techniques, and of their substitution semantics based on the homomorphic extension of convex polyhedra and affine functions.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129335341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The refinement of high-level parallel algorithm specifications","authors":"S. F. Hummel, S. Talla, J. Brennan","doi":"10.1109/PMMPC.1995.504347","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504347","url":null,"abstract":"PSETL is a prototyping language for developing efficient numeric code for massively parallel machines. PSETL enables parallel algorithms to be concisely specified at a very high level, and successively refined into lower level architecture-specific code. It includes a rich variety of parallel loops over sets, bags, and tuples, and a hierarchy of communication mechanisms, ranging from atomic assignments to reductions and scans on collections. We illustrate the parallel features of PSETL and the refinement process using an N-body simulation code as a case study. The high-level code, which is only a few pages long, is refined for execution on shared and disjoint address-space MIMD machines.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116784837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realising a concurrent object-based programming model on parallel virtual shared memory architectures","authors":"Michael Fisher, J. Keane","doi":"10.1109/PMMPC.1995.504345","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504345","url":null,"abstract":"In this paper, we investigate the suitability of parallel architectures for the realisation of a novel object-based computational model encapsulated within programming languages such as Concurrent MetateM. This model incorporates objects, groups, broadcast message-passing and asynchronous execution. As such, it provides a high-level, architecture-independent representation for a variety of concurrent systems. The class of parallel architectures which we consider are logically shared but physically distributed memory systems.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125844351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Loop versus data scheduling: models, language and application for SVM","authors":"M. O’Boyle, J. M. Bull","doi":"10.1109/PMMPC.1995.504342","DOIUrl":"https://doi.org/10.1109/PMMPC.1995.504342","url":null,"abstract":"In this paper we show that, under different circumstances, data scheduling and loop scheduling are both useful models for parallel programs executing on shared virtual memory (SVM) systems. We therefore propose a unified programming model that permits both types of scheduling. We show that, given affine array references, a program segment which is parallel under loop scheduling can always be transformed to make it parallel under data scheduling and vice-versa, and hence that the two types of scheduling are equally powerful at exploiting parallelism. We review existing Fortran dialects for SVM and propose compiler directives that allow program segments to be data scheduled.","PeriodicalId":344246,"journal":{"name":"Programming Models for Massively Parallel Computers","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123192738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}