Programming Models for Massively Parallel Computers: Latest Publications

Distributed memory implementation of elliptic partial differential equations in a dataparallel functional language
H. Kuchen, H. Stoltze, I. Dimov, A. Karaivanova
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504352
Abstract: We show that the numerical solution of partial differential equations can be elegantly and efficiently addressed in a functional language. Two statistical numerical methods are considered. We discuss why current parallel imperative languages are difficult to use and why general (expression-parallel) functional languages are not efficient enough. The key point of our approach is to offer "unique" arrays together with operations that handle their elements in parallel, including operations that exchange the partitions of an array between the processors. These operations constitute a deadlock-free, high-level form of communication.
Citations: 4
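The "unique" arrays mentioned in the abstract above are not spelled out in this listing; the following Python sketch (illustrative only, not the authors' functional language) simulates the two kinds of operations it names: partition-wise element operations and a collective operation that exchanges whole partitions between processors. All function names are assumptions made for illustration.

```python
# A rough Python simulation of the programming model described above: an array
# is split into per-processor partitions, element-wise operations are applied
# partition by partition, and "rotate" moves whole partitions between
# neighbouring processors as a single collective, deadlock-free step.

def split(xs, p):
    """Split list xs into p contiguous partitions (one per processor)."""
    n = len(xs)
    return [xs[i * n // p:(i + 1) * n // p] for i in range(p)]

def par_map(f, parts):
    """Apply f to every element; each partition could run on its own processor."""
    return [[f(x) for x in part] for part in parts]

def rotate(parts, k=1):
    """Exchange whole partitions between processors (a collective shift by k)."""
    return parts[k:] + parts[:k]

if __name__ == "__main__":
    parts = split(list(range(8)), 4)          # [[0, 1], [2, 3], [4, 5], [6, 7]]
    print(par_map(lambda x: x * x, parts))    # squared, partition by partition
    print(rotate(parts))                      # [[2, 3], [4, 5], [6, 7], [0, 1]]
```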
Compiling SVM-Fortran for the Intel Paragon XP/S
R. Berrendorf, M. Gerndt
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504341
Abstract: SVM-Fortran is a language designed to program highly parallel systems with a global address space. A compiler for SVM-Fortran is described which generates code for parallel machines; our current target machine is the Intel Paragon XP/S with an SVM-extension called ASVM. Performance numbers are given for applications and compared to results obtained with corresponding HPF-versions.
Citations: 2
Interactive visualization of high-dimension iteration and data sets
Z.S. Chamski, G. A. Hedayat
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504358
Abstract: Many well-formalized program transformations rely on techniques derived from linear algebra. In such transformations, program entities are represented using polyhedra, which are then transformed using linear or affine functions. However, reasoning within this abstract framework is made extremely difficult by the high dimensionality of the spaces used to represent complex program transformations and the various entities in the resulting programs: data sets, iteration domains, access functions, etc. This difficulty can be alleviated, at least partly, by providing tools for interactive visualization and manipulation of polyhedra and integrating such tools into a programming environment. In this paper we explore the issues involved in designing an interactive visualization tool for high-dimensionality polyhedra, and discuss possible research directions arising from our current experience.
Citations: 1
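As a rough illustration of the polyhedral setting the abstract above refers to (not the paper's tool), the following Python sketch enumerates the integer points of a small iteration-domain polyhedron {x | Ax <= b} and projects them onto two chosen axes, which is the simplest way to view a higher-dimensional domain in 2D. The triangular loop nest used as an example is an assumption for illustration.

```python
# Enumerate the integer points of a bounded polyhedron A x <= b and project
# them onto a pair of dimensions to obtain a 2D view of the iteration domain.
from itertools import product

def integer_points(A, b, bounds):
    """Integer points x within `bounds` that satisfy A x <= b (row-wise)."""
    return [x for x in product(*[range(lo, hi + 1) for lo, hi in bounds])
            if all(sum(a * xi for a, xi in zip(row, x)) <= bi
                   for row, bi in zip(A, b))]

def project(points, dims):
    """Project each point onto the selected dimensions and drop duplicates."""
    return sorted({tuple(p[d] for d in dims) for p in points})

if __name__ == "__main__":
    # Triangular loop nest 0 <= k <= j <= i <= 3, written as A x <= b with x = (i, j, k).
    A = [[1, 0, 0], [-1, 1, 0], [0, -1, 1], [0, 0, -1]]
    b = [3, 0, 0, 0]
    pts = integer_points(A, b, bounds=[(0, 3)] * 3)
    print(len(pts))               # 20 points in the 3D iteration domain
    print(project(pts, (0, 1)))   # its shadow on the (i, j) plane: 0 <= j <= i <= 3
```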
The parallel Fortran family and a new perspective
John Darlington, Yike Guo, Jin Yang
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504350
Abstract: Various parallel Fortran languages have been developed over the years. The research work in creating this Parallel Fortran Family has made significant contributions to parallel programming language design and implementation. In this paper, various parallel Fortran languages are studied based on a uniform co-ordination approach towards parallel programming. That is, new language constructs in parallel Fortran systems are regarded as providing a co-ordination mechanism organising a set of single-threaded computations, coded in standard Fortran, into a parallel ensemble. Features of different parallel Fortran languages are studied by investigating their corresponding co-ordination models. A new perspective on designing a structured parallel Fortran system is proposed by using a generic structured co-ordination language, SCL, as the uniform means to organise parallel Fortran computation.
Citations: 1
Term graph rewriting as a specification and implementation framework for concurrent object-oriented programming languages
R. Banach, G. A. Papadopoulos
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504353
Abstract: This paper examines the usefulness of the generalised computational model of Term Graph Rewriting Systems (TGRS) for designing and implementing concurrent object-oriented languages, and also for specifying and reasoning about the interaction between concurrency and object-orientation (such as concurrent synchronisation of methods or interference between concurrency and inheritance). This is done by mapping a state-of-the-art functional object-oriented language onto the MONSTR computational model, a restricted form of TGRS specifically designed to act as a point of reference in the design and implementation of declarative and semi-declarative programming languages, especially those suited to distributed architectures.
Citations: 4
Deriving optimal data distributions for group parallel numerical algorithms
T. Rauber, G. Runger, R. Wilhelm
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504339
Abstract: Numerical algorithms often exhibit potential parallelism caused by a coarse structure of submethods in addition to the medium grain parallelism of systems within submethods. We present a derivation methodology for parallel programs of numerical methods on distributed memory machines that exploits both levels of parallelism in a group-SPMD parallel computation model. The derivation process starts with a specification of the numerical method in a module structure of submethods, and results in a parallel frame program containing all implementation decisions of the parallel implementation. The implementation derivation includes scheduling of modules, assigning processors to modules and choosing data distributions for basic modules. The methodology eases parallel programming and supplies a formal basis for automatic support. An analysis model allows performance predictions for parallel frame programs. In this article we concentrate on the determination of optimal data distributions using a dynamic programming approach based on data distribution types and incomplete run-time formulas.
Citations: 13
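The dynamic programming idea mentioned at the end of the abstract above can be shown on a toy problem; the cost numbers and distribution names in the Python sketch below are made up, and the sketch is a simplification, not the paper's cost model. The goal is to choose one data distribution per submethod so that execution cost plus redistribution cost between consecutive submethods is minimal.

```python
# Dynamic programming over a chain of modules: best[d] holds the cheapest total
# cost of the prefix ending with distribution d; back-pointers recover the chain.

def best_distributions(exec_cost, redist_cost):
    """exec_cost[m][d]: cost of module m under distribution d;
    redist_cost[p][d]: cost of redistributing from p to d between modules."""
    n, D = len(exec_cost), len(exec_cost[0])
    cost = list(exec_cost[0])          # best cost so far, per final distribution
    back = []                          # back-pointers for reconstructing the chain
    for m in range(1, n):
        new_cost, prev = [], []
        for d in range(D):
            p = min(range(D), key=lambda q: cost[q] + redist_cost[q][d])
            new_cost.append(cost[p] + redist_cost[p][d] + exec_cost[m][d])
            prev.append(p)
        cost, back = new_cost, back + [prev]
    d = min(range(D), key=cost.__getitem__)
    chain = [d]
    for prev in reversed(back):        # walk back from the last module to the first
        d = prev[d]
        chain.append(d)
    return min(cost), chain[::-1]

if __name__ == "__main__":
    dists = ["row-block", "column-block", "cyclic"]    # illustrative names only
    exec_cost = [[4, 6, 5], [7, 2, 6], [3, 9, 4]]      # 3 modules x 3 distributions
    redist = [[0, 3, 2], [3, 0, 2], [2, 2, 0]]
    total, chain = best_distributions(exec_cost, redist)
    print(total, [dists[d] for d in chain])            # 14 ['row-block', 'row-block', 'row-block']
```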
Parallel EARS [edge addition rewrite systems]
U. Assmann
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504359
Abstract: In this paper we show how edge addition rewrite systems (EARS) can be evaluated in parallel. EARS are a simple variant of graph rewrite systems, which only add edges to graphs. Because EARS are equivalent to a subset of Datalog, they provide a programming model for rule-based applications. EARS terminate and are strongly confluent, which makes them perfectly apt for parallel execution. In this paper we present two parallel evaluation methods, order-domain partitioning and evaluation on carrier-graphs. EARS provide scalable parallelism because efficient sequential evaluation techniques also exist.
Citations: 0
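Because an EARS only adds edges, repeated rule application reaches a unique fixpoint regardless of evaluation order, which is what makes partitioned or parallel evaluation safe. The Python sketch below (not the paper's system) shows the idea on the simplest such rule: add an edge (x, z) whenever edges (x, y) and (y, z) exist, i.e. transitive closure computed to a fixpoint.

```python
# Fixpoint evaluation of an edge-addition rule: the rule can only grow the edge
# set, so iteration terminates and the result is independent of the order in
# which rule instances are applied.

def transitive_closure(edges):
    """Add (x, z) whenever (x, y) and (y, z) are present, until nothing changes."""
    graph = set(edges)
    changed = True
    while changed:
        new = {(x, z) for (x, y1) in graph for (y2, z) in graph if y1 == y2}
        changed = not new <= graph
        graph |= new
    return graph

if __name__ == "__main__":
    print(sorted(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")})))
    # [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]
```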
Provably correct vectorization of nested-parallel programs
J. Riely, J. Prins, S. Iyer
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504361
Abstract: The work/step framework provides a high-level cost model for nested data-parallel programming languages, allowing programmers to understand the efficiency of their codes without concern for the eventual mapping of tasks to processors. Vectorization, or flattening, is the key technique for compiling nested-parallel languages. This paper presents a formal study of vectorization, considering three low-level targets: the EREW, bounded-contention CREW, and CREW variants of the VRAM. For each, we describe a variant of the cost model and prove the correctness of vectorization for that model. The models impose different constraints on the set of programs and implementations that can be considered; we discuss these in detail.
Citations: 7
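Flattening itself is not shown in this listing; as a small sketch of the standard technique (in Python rather than the paper's formal setting), a nested sequence can be stored as one flat value vector plus a segment descriptor, so that a nested parallel map becomes a single flat map and per-segment reductions become segmented operations over the flat vector.

```python
# Flattening a nested sequence into (flat data, segment lengths): the nested
# structure is recorded separately so the element work can run as one flat
# data-parallel operation.

def flatten(nested):
    flat = [x for seg in nested for x in seg]
    seg_lens = [len(seg) for seg in nested]
    return flat, seg_lens

def flat_map(f, flat):
    return [f(x) for x in flat]          # one flat data-parallel operation

def segmented_sum(flat, seg_lens):
    """Reduce each segment of the flat vector separately."""
    out, i = [], 0
    for n in seg_lens:
        out.append(sum(flat[i:i + n]))
        i += n
    return out

if __name__ == "__main__":
    nested = [[1, 2, 3], [], [4, 5]]
    flat, lens = flatten(nested)                                 # ([1,2,3,4,5], [3,0,2])
    print(segmented_sum(flat_map(lambda x: x * x, flat), lens))  # [14, 0, 41]
```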
A package for automatic parallelization of serial C-programs for distributed systems
V. Beletsky, Alexander Bagaterenco, A. Chemeris
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504357
Abstract: Problems arising when running existing software on parallel computer systems are considered. The task can be stated as follows: serial programs must first be analyzed and then modified so that they can run on parallel computers. The problems that arise are analyzed, ways to tackle them are given, and the structure of the programming package is described. It is shown that for most sequential programs the major share of execution time is spent in loops. Three loop parallelization methods have been selected for implementation: the method of coordinates, the method of linear transformations, and a modified method of linear-piece parallelization. The principles of dependence graph construction are explained and the scheduling methods are enumerated.
Citations: 1
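As a toy illustration of the kind of dependence test such a package applies before parallelizing a loop (this is the classic GCD test, not necessarily the package's own method): for a write to A[a*i + b] and a read of A[c*i + d] in the same loop, a dependence between iterations i1 and i2 requires a*i1 + b == c*i2 + d, and no integer solution exists unless gcd(a, c) divides d - b. The Python sketch below is an assumption-laden simplification.

```python
# GCD test for a pair of affine array accesses in one loop: if gcd(a, c) does
# not divide d - b, the write A[a*i + b] and the read A[c*i + d] can never
# touch the same element, so this pair carries no dependence.
from math import gcd

def gcd_test(a, b, c, d):
    """Return False if the GCD test proves the two accesses never overlap."""
    g = gcd(a, c)
    return (d - b) % g == 0 if g != 0 else b == d

if __name__ == "__main__":
    # for i in range(N): A[2*i] = ... ; ... = A[2*i + 1]
    print(gcd_test(2, 0, 2, 1))   # False: writes touch even indices, reads odd ones
    # for i in range(N): A[i] = ... ; ... = A[i - 1]
    print(gcd_test(1, 0, 1, -1))  # True: a dependence may exist, keep the loop serial
```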
Facilitating the development of portable parallel applications on distributed memory systems
C. Voliotis, G. Manis, A. Thanos, P. Tsanakas, G. Papakonstantinou
Programming Models for Massively Parallel Computers. Pub Date: 1995-10-09. DOI: 10.1109/PMMPC.1995.504356
Abstract: In this paper, two programming tools are presented that facilitate the development of portable parallel applications on distributed memory systems. The Orchid system is a software platform, i.e. a set of facilities for parallel programming. It consists of mechanisms for transparent message passing and a set of primitive functions supporting the distributed shared memory programming model. In order to free the user from the tedious task of parallel programming, a new environment for logic programming is introduced: the Daffodil framework. Daffodil, implemented on top of Orchid, evaluates pure PROLOG programs by exploiting the inherent AND/OR parallelism. Both systems have been implemented and evaluated on various platforms, since the layered structure of Orchid ensures portability by re-engineering only a small part of the code.
Citations: 5