Proceedings of Workshop on Programming Models for Massively Parallel Computers — Latest Publications

Structured parallel programming
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315543
J. Darlington, M. Ghanem, H. To
Abstract: Parallel programming is a difficult task involving many complex issues such as resource allocation and process coordination. We propose a solution to this problem based on the use of a repertoire of parallel algorithmic forms, known as skeletons. The use of skeletons enables the meaning of a parallel program to be separated from its behaviour. Central to this methodology is the use of transformations and performance models. Transformations provide portability and implementation choices, whilst performance models guide the choices by providing predictions of execution time. We describe the methodology and investigate the use and construction of performance models by studying an example.
Citations: 81

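The skeleton idea above separates what a program means (for example, "apply a function to every element") from how it behaves in parallel. As a loose illustration only, and not the notation of the paper, here is a minimal task-farm skeleton in Python; the names `farm` and `square` are invented for the example.

```python
# Minimal task-farm skeleton sketch (illustrative; not the paper's notation).
# The "meaning" is just map(f, xs); the parallel "behaviour" is hidden inside
# the skeleton, which here simply delegates to a process pool.
from multiprocessing import Pool

def farm(f, xs, workers=4):
    """Task-farm skeleton: apply f to each element of xs in parallel."""
    with Pool(processes=workers) as pool:
        return pool.map(f, xs)

def square(x):
    return x * x

if __name__ == "__main__":
    print(farm(square, range(10)))   # [0, 1, 4, 9, ...]
```

A performance model in the spirit of the paper would predict something like T ≈ t_setup + n·t_f/p for p workers and use that prediction to choose between implementations; the models studied in the paper are considerably more detailed.
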
Compiling data parallel programs to message passing programs for massively parallel MIMD systems
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315550
T. Brandes
Abstract: The currently dominant message-passing programming paradigm for MIMD systems is difficult to use and error prone. One approach that avoids explicit communication is the data-parallel programming model. This model provides a single thread of control, a global name space, and loosely synchronous parallel computation. It is easy to use, and data-parallel programs usually scale very well. Based on the experiences of an existing compilation system for data-parallel Fortran programs, it is shown how to design such a compilation system and which optimization techniques are required to make data-parallel programs competitive with their handwritten message-passing counterparts.
Citations: 16

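This is not Brandes's compiler output, just a sketch of the general idea such compilers implement: the data-parallel source mentions no communication, but each generated node program owns a block of every distributed array and exchanges halo elements with its neighbours before each local update. The sketch uses Python's multiprocessing pipes in place of a real message-passing library; all names and the 3-point smoothing example are invented.

```python
# Sketch of what a data-parallel stencil compiles down to on a message-passing
# machine (illustrative; not the output of the compiler in the paper).
from multiprocessing import Process, Pipe

def worker(rank, block, left, right, result):
    """One generated node program: local block plus explicit halo exchange."""
    # Send own boundary values to the neighbours, then receive theirs.
    if left is not None:
        left.send(block[0])
    if right is not None:
        right.send(block[-1])
    left_halo = left.recv() if left is not None else block[0]
    right_halo = right.recv() if right is not None else block[-1]

    # Local slice of the global data-parallel update (3-point smoothing).
    padded = [left_halo] + block + [right_halo]
    new = [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
           for i in range(1, len(padded) - 1)]
    result.send((rank, new))
    result.close()

if __name__ == "__main__":
    nprocs, n = 4, 16
    data = list(range(n))
    chunk = n // nprocs
    blocks = [data[r * chunk:(r + 1) * chunk] for r in range(nprocs)]
    links = [Pipe() for _ in range(nprocs - 1)]    # neighbour channels
    results = [Pipe() for _ in range(nprocs)]      # worker -> parent channels
    procs = []
    for r in range(nprocs):
        left = links[r - 1][1] if r > 0 else None
        right = links[r][0] if r < nprocs - 1 else None
        p = Process(target=worker,
                    args=(r, blocks[r], left, right, results[r][1]))
        p.start()
        procs.append(p)
    parts = sorted(results[r][0].recv() for r in range(nprocs))
    for p in procs:
        p.join()
    print([x for _, part in parts for x in part])
```

The optimizations discussed in the paper aim at keeping exactly this generated communication (its frequency and message sizes) competitive with what a programmer would write by hand.
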
The Modula-2* environment for parallel programming
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315555
S.U. Hanssgen, E. A. Heinz, P. Lukowicz, M. Philippsen, W. Tichy
Abstract: Presents a portable parallel programming environment for Modula-2*, an explicitly parallel, machine-independent extension of Modula-2. Modula-2* offers synchronous and asynchronous parallelism, a global single address space, and automatic data and process distribution. The Modula-2* system consists of a compiler, a debugger, a cross-architecture make, a graphical X Windows control panel, run-time systems for different machines, and sets of scalable parallel libraries. The existing implementation targets the MasPar MP series of massively parallel processors (SIMD), the KSR-1 parallel computer (MIMD), heterogeneous LANs of workstations (MIMD), and single workstations (SISD). We describe the important components of the Modula-2* environment and discuss selected implementation issues. We focus on how we achieve a high degree of portability for our system while at the same time ensuring efficiency.
Citations: 9

Modeling parallel computers as memory hierarchies
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315548
B. Alpern, L. Carter, J. Ferrante
Abstract: A parameterized generic model that captures the features of diverse computer architectures would facilitate the development of portable programs. Specific models appropriate to particular computers are obtained by specifying parameters of the generic model. A generic model should be simple, and for each machine that it is intended to represent, it should have a reasonably accurate specific model. The Parallel Memory Hierarchy (PMH) model of computation uses a single mechanism to model the costs of both interprocessor communication and memory hierarchy traffic. A computer is modeled as a tree of memory modules with processors at the leaves. All data movement takes the form of block transfers between children and their parents. The paper assesses the strengths and weaknesses of the PMH model as a generic model.
Citations: 93

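As a toy rendering of the model (with invented parameters, not values from the paper), a computer can be written down as a tree of memory modules with processors at the leaves, and the cost of moving data between two leaves accumulated as block transfers along the path through their common ancestor:

```python
# Toy rendering of the Parallel Memory Hierarchy (PMH) idea: a computer is a
# tree of memory modules, processors sit at the leaves, and all data movement
# is block transfers between a child and its parent.  All parameter values
# below are invented for illustration only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(eq=False)
class Module:
    name: str
    blocksize: int            # words per block transferred to/from the parent
    transfer_time: float      # time to move one block to/from the parent
    parent: Optional["Module"] = None
    children: List["Module"] = field(default_factory=list)

    def add_child(self, child: "Module") -> "Module":
        child.parent = self
        self.children.append(child)
        return child

def path_to_root(m: Module):
    while m is not None:
        yield m
        m = m.parent

def transfer_cost(src: Module, dst: Module, words: int) -> float:
    """Cost of moving `words` of data between two leaves: blocks move up from
    src to the common ancestor, then down to dst."""
    src_path = list(path_to_root(src))
    dst_path = list(path_to_root(dst))
    common = next(m for m in src_path if m in dst_path)
    hops = src_path[:src_path.index(common)] + dst_path[:dst_path.index(common)]
    cost = 0.0
    for m in hops:                               # one child<->parent transfer each
        blocks = -(-words // m.blocksize)        # ceiling division
        cost += blocks * m.transfer_time
    return cost

if __name__ == "__main__":
    root = Module("shared memory", blocksize=1024, transfer_time=100.0)
    node0 = root.add_child(Module("node 0 memory", 64, 10.0))
    node1 = root.add_child(Module("node 1 memory", 64, 10.0))
    cache0 = node0.add_child(Module("cache 0", 8, 1.0))
    cache1 = node1.add_child(Module("cache 1", 8, 1.0))
    print(transfer_cost(cache0, cache1, words=256))
```

The single-mechanism claim of the abstract shows up here as the one `transfer_cost` routine covering both "memory hierarchy traffic" (child to parent within a node) and "interprocessor communication" (the path through the shared ancestor).
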
MANIFOLD: a programming model for massive parallelism
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315544
F. Arbab, É. Rutten
Abstract: MANIFOLD is a coordination language for orchestration of the communications among independent, cooperating processes in a massively parallel or distributed application. The fundamental principle underlying MANIFOLD is the complete separation of computation from communication. This means that in MANIFOLD, computation processes know nothing about their own communication with other processes, and coordinator processes manage the communications among a set of processes but know nothing about the computations they carry out. This principle leads to more flexible software made out of more re-usable components, and supports open systems. MANIFOLD is a new programming language based on a number of novel concepts. MANIFOLD is about concurrency of cooperation, as opposed to the classical work on concurrency, which deals with concurrency of competition. In order to better understand the fundamentals of this language and its underlying model, we focus on the kernel of a simple sub-language of MANIFOLD, called MINIFOLD.
Citations: 6

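MANIFOLD has its own syntax and semantics, none of which are reproduced here; the following is only a rough Python analogy to the separation the abstract describes. The computation routines see nothing but the streams handed to them, and the coordinator wires streams together without ever touching the values.

```python
# Rough analogy to MANIFOLD's computation/coordination split (not MANIFOLD
# syntax): workers only see the streams they are handed; the coordinator only
# wires streams together and never looks at what flows through them.
import queue
import threading

SENTINEL = object()

def producer(out_stream):
    """Computation process: emits values; knows nothing about the consumer."""
    for i in range(5):
        out_stream.put(i)
    out_stream.put(SENTINEL)

def doubler(in_stream, out_stream):
    """Computation process: transforms values; unaware of who is connected."""
    while (item := in_stream.get()) is not SENTINEL:
        out_stream.put(item * 2)
    out_stream.put(SENTINEL)

def printer(in_stream):
    """Computation process: consumes values."""
    while (item := in_stream.get()) is not SENTINEL:
        print("got", item)

def coordinator():
    """Coordinator: decides the topology, performs no computation itself."""
    a, b = queue.Queue(), queue.Queue()
    stages = [threading.Thread(target=producer, args=(a,)),
              threading.Thread(target=doubler, args=(a, b)),
              threading.Thread(target=printer, args=(b,))]
    for t in stages:
        t.start()
    for t in stages:
        t.join()

if __name__ == "__main__":
    coordinator()
```

Because the wiring lives entirely in `coordinator`, the same computation routines could be re-used in a different topology without being changed, which is the re-usability argument the abstract makes.
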
Performance analysis of distributed applications by suitability functions
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315540
V. Getov, R. Hockney, A. Hey
Abstract: A simple programming model of distributed-memory message-passing computer systems is first applied to describe the architecture/application pair by two sets of parameters. The node timing formula is then derived on the basis of scalar, vector and communication components. A set of suitability functions, extracted from the performance formulae, is defined. These functions are applied, as an example, to the performance analysis of the 1-dimensional FFT benchmark from the GENESIS benchmark suite. The suitability functions could also be useful for comparative performance analysis of both existing distributed-memory systems and new architectures under development.
Citations: 7

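The abstract does not spell out the formulas, so the sketch below only illustrates the general shape of such an analysis: a node timing built from scalar, vector and communication components, and "suitability" taken here as each component's share of total time. All parameter names and values are invented; the paper's actual node timing formula and suitability functions differ.

```python
# Illustrative shape of a node-timing / suitability analysis (the actual
# formulas and parameters in the paper differ; everything below is invented
# purely to show the structure of such a model).

def node_time(n_scalar, n_vector, n_comm_words,
              r_scalar, r_vector, r_comm, t_latency, n_messages):
    """Node time split into scalar, vector and communication components."""
    t_s = n_scalar / r_scalar                                 # scalar work
    t_v = n_vector / r_vector                                 # vector work
    t_c = n_messages * t_latency + n_comm_words / r_comm      # communication
    return t_s, t_v, t_c

def suitability(t_s, t_v, t_c):
    """Fraction of total node time taken by each component."""
    total = t_s + t_v + t_c
    return {"scalar": t_s / total, "vector": t_v / total, "comm": t_c / total}

if __name__ == "__main__":
    parts = node_time(n_scalar=1e6, n_vector=1e8, n_comm_words=1e5,
                      r_scalar=5e7, r_vector=2e9, r_comm=1e7,
                      t_latency=1e-4, n_messages=100)
    print(suitability(*parts))
```

Comparing such fractions across machines for a fixed application, or across applications for a fixed machine, is the kind of comparative analysis the abstract has in mind.
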
Parallel programming models and their interdependence with parallel architectures
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315560
W. Giloi
Abstract: Because of its superior performance and cost-effectiveness, parallel computing will become the future standard, provided we have the appropriate programming models, tools and compilers needed to make parallel computers widely usable. The dominating programming style is procedural, given in the form of either the memory-sharing or the message-passing paradigm. The advantages and disadvantages of these models and their supporting architectures are discussed, as well as the tools by which parallel programming is made machine-independent. Further improvements can be expected from very high level coordination languages. A general breakthrough of parallel computing, however, will only come with parallelizing compilers that enable the user to program applications in the conventional sequential style. The state of the art of parallelizing compilers is outlined, and it is shown how they will be supported by higher-level programming models and multi-threaded architectures.
Citations: 8

Massively parallel programming using object parallelism
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315545
W. Joosen, S. Bijnens, P. Verbaeten
Abstract: We introduce the concept of object parallelism. Object parallelism offers a unified model in comparison with traditional parallelisation techniques such as data parallelism and algorithmic parallelism. In addition, two fundamental advantages of the object-oriented approach are exploited. First, the abstraction level of object parallelism is application-oriented, i.e., it hides the details of the underlying parallel architecture. Thus, the portability of parallel applications is inherent and program development can occur on monoprocessor systems. Second, the concept of specialisation (through inheritance) enables the integration of the given application code with advanced run-time support for load balancing and fault tolerance.
Citations: 3

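The paper's own run-time system is not reproduced here; the sketch below only illustrates the two points of the abstract in plain Python. Application code lives in an ordinary method, while asynchronous invocation and a deliberately trivial placement policy are factored into base classes and refined by inheritance; all class and method names are invented.

```python
# Sketch of the object-parallelism idea (not the paper's system): application
# code is an ordinary method; parallel invocation and placement policy are
# factored into the object machinery and refined through inheritance.
from concurrent.futures import ThreadPoolExecutor

class ParallelObject:
    """Base class: method calls can be issued asynchronously; the application
    code inside the method never deals with threads or placement."""
    _pool = ThreadPoolExecutor(max_workers=4)

    def invoke(self, method_name, *args):
        return self._pool.submit(getattr(self, method_name), *args)

class LoadBalancedObject(ParallelObject):
    """Specialisation: adds a (trivial round-robin) load-balancing policy by
    inheritance, without changing the application-level method below."""
    _pools = [ThreadPoolExecutor(max_workers=2) for _ in range(2)]
    _next = 0

    def invoke(self, method_name, *args):
        pool = self._pools[LoadBalancedObject._next % len(self._pools)]
        LoadBalancedObject._next += 1
        return pool.submit(getattr(self, method_name), *args)

class Simulation(LoadBalancedObject):
    def step(self, cell):
        return cell * cell            # purely application-level code

if __name__ == "__main__":
    sim = Simulation()
    futures = [sim.invoke("step", c) for c in range(8)]
    print([f.result() for f in futures])
```

Because `Simulation.step` knows nothing about pools or threads, the same class also runs unchanged on a single processor, which mirrors the portability argument in the abstract.
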
PROMOTER: an application-oriented programming model for massive parallelism
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315539
W. Giloi, A. Schramm
Abstract: The article deals with the rationale and concepts of a programming model for massive parallelism. We mention the basic properties of massively parallel applications and develop a programming model for data parallelism on distributed-memory computers. Its key features are a suitable combination of homogeneity and heterogeneity aspects, a unified representation of data point configurations and interconnection schemes by explicit virtual data topologies, and various synchronization schemes and nondeterminisms. An outline of the linguistic representation and the abstract execution model is given.
Citations: 15

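PROMOTER's actual notation is not shown in this listing, so the following is only an invented data-structure sketch of the "explicit virtual data topology" idea: the topology names the data points and their interconnection scheme, and a conceptually data-parallel step is written against the topology rather than against machine processors.

```python
# Toy rendering of an explicit virtual data topology (illustration only; the
# notation and semantics of PROMOTER itself differ).

def grid_topology(rows, cols):
    """Data points are (i, j) coordinates; the neighbour map is the
    interconnection scheme of the virtual topology."""
    points = [(i, j) for i in range(rows) for j in range(cols)]
    neighbours = {
        (i, j): [(i + di, j + dj)
                 for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < rows and 0 <= j + dj < cols]
        for (i, j) in points
    }
    return points, neighbours

def parallel_step(values, points, neighbours):
    """Conceptually data-parallel: every point is updated from its
    neighbourhood as defined by the topology."""
    return {p: sum(values[q] for q in neighbours[p]) / max(len(neighbours[p]), 1)
            for p in points}

if __name__ == "__main__":
    points, neighbours = grid_topology(4, 4)
    values = {p: float(p[0] * 4 + p[1]) for p in points}
    print(parallel_step(values, points, neighbours))
```

Mapping such a virtual topology onto physical processors is left to the system, which is what makes the model application-oriented in the sense of the abstract.
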
Interprocedural heap analysis for parallelizing imperative programs
Proceedings of Workshop on Programming Models for Massively Parallel Computers. Pub Date: 1993-09-20. DOI: 10.1109/PMMP.1993.315553
U. Assman, M. Weinhardt
Abstract: The parallelization of imperative programs working on pointer data structures is made possible by extensive heap analysis. We therefore consider a new interprocedural version of the heap analysis algorithm with summary nodes from Chase, Wegman and Zadeck (1990). Our analysis handles arbitrary call graphs, including recursion, works on a realistic low-level intermediate language, and uses a modified propagation method to correct an inaccuracy of the original algorithm. Furthermore, we discuss how loops and recursions over heap data structures can be parallelized based on the analysis information.
Citations: 17

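The algorithm in the paper is interprocedural and works on a low-level intermediate language; the toy sketch below only shows the underlying summary-node abstraction in the style of Chase, Wegman and Zadeck: one abstract node per allocation site, with may-point-to sets updated conservatively. All variable and site names are invented for the example.

```python
# Toy sketch of heap analysis with summary nodes (in the spirit of Chase/
# Wegman/Zadeck, but far simpler than the interprocedural algorithm of the
# paper): every allocation site becomes one abstract "summary" node, and the
# analysis records which abstract nodes may reference which.
from collections import defaultdict

class AbstractHeap:
    def __init__(self):
        self.site_nodes = {}                  # allocation site -> summary node
        self.points_to = defaultdict(set)     # (node, field)   -> {nodes}
        self.vars = defaultdict(set)          # variable        -> {nodes}

    def alloc(self, var, site):
        """x = new T() at `site`: all objects from this site share one node."""
        node = self.site_nodes.setdefault(site, f"n@{site}")
        self.vars[var] = {node}
        return node

    def store(self, var, field, src_var):
        """x.f = y: every node x may denote may now point to every node of y."""
        for target in self.vars[var]:
            self.points_to[(target, field)] |= self.vars[src_var]

    def load(self, dst_var, var, field):
        """x = y.f: x may denote anything that y.f may point to."""
        self.vars[dst_var] = set().union(
            *(self.points_to[(n, field)] for n in self.vars[var]))

if __name__ == "__main__":
    h = AbstractHeap()
    h.alloc("head", site="list.py:3")
    h.alloc("cell", site="list.py:7")
    h.store("head", "next", "cell")
    h.load("p", "head", "next")
    print(h.vars["p"])    # p may only denote the summary node for list.py:7
```

If two loop iterations can be shown to reach disjoint sets of summary nodes, the iterations touch disjoint heap regions and may run in parallel, which is the kind of information the paper extracts for parallelizing loops and recursions over heap structures.
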