Proceedings of Workshop on Programming Models for Massively Parallel Computers — Latest Publications

Reduced interprocessor-communication architecture for supporting programming models
Proceedings of Workshop on Programming Models for Massively Parallel Computers · Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315546
S. Sakai, K. Okamoto, Y. Kodama, M. Sato
Abstract: The paper presents an execution model and a processor architecture for general-purpose massively parallel computers. To construct an efficient massively parallel computer: the execution model should be natural enough to map an actual problem structure into a processor architecture; each processor should have an efficient and simple communication structure; and computation and communication should be tightly coupled and highly overlapped. To meet these requirements, we obtain a simplified architecture with a Continuation Driven Execution Model. We call this architecture RICA. RICA consists of a simplified message-handling pipeline, a continuation-driven thread invocation mechanism, a RISC core for instruction execution, a message generation pipeline which can send messages asynchronously with other operations, and a thread-switching mechanism with little overhead, all fused into a simple architecture. Finally, we show how RICA realizes the parallel primitives of programming models, and how efficiently it does so. The primitives examined are shared memory primitives, message passing primitives and barriers.
Citations: 5
The DSPL programming environment
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315556
A. Mitschele-Thiel
Abstract: Gives an overview of the principal concepts employed in the DSPL (Data Stream Processing Language) programming environment, an integrated approach to automating the system design and implementation of parallel applications. The programming environment consists of a programming language and the following set of integrated tools: (1) the modeling tool automatically derives a software model from the given application program; (2) the model-based optimization tool uses the software model to compute design decisions such as network topology, task granularity, task assignment and task execution order; (3) finally, the compiler/optimizer transforms the application program into executable code for the chosen processor network, reflecting the design decisions.
Citations: 6
Parallel symbolic processing - can it be done?
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315558
A. Sodan
Abstract: In principle the answer is: yes, but it depends. Parallelization of symbolic applications is possible, but only for certain classes of applications. Distributed memory may prevent parallelization in cases where the ratio of communication to computation overhead becomes too high, but it may also be an advantage when applications require much garbage collection, which can then be done in a distributed way. There are also applications with a higher degree of parallelism than shared memory can support, and these are candidates for profiting from massively parallel architectures.
Citations: 0
On the implementation of virtual shared memory
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315542
W. Zimmermann, H. Kumm
Abstract: The field of parallel algorithms has demonstrated that a machine model with virtual shared memory is easy to program. Most work in this field has been based on the PRAM model. Theoretical results show that a PRAM can be simulated optimally on an interconnection network. We discuss implementations of some of these PRAM simulations and evaluate their performance.
Citations: 5
Virtual shared memory-based support for novel (parallel) programming paradigms
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315552
J. Keane, M. Xu
Abstract: Discusses the implementation of novel programming paradigms on virtual shared memory (VSM) parallel architectures. A wide spectrum of paradigms (data-parallel, functional and logic languages) has been investigated in order to achieve, within the context of VSM parallel architectures, a better understanding of the underlying support mechanisms for the paradigms and to identify commonality among the different mechanisms. An overview of VSM is given in the context of a commercially available VSM machine, the KSR-1. The correspondence between the features of the high-level languages and the VSM features that assist efficient implementation is presented. Case studies are discussed as concrete examples of the issues involved.
Citations: 2
Overall design of Pandore II: an environment for high performance C programming on DMPCs
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315557
F. André, Jean-Louis Pazat
Abstract: Pandore II is an environment designed for parallel execution of imperative sequential programs on distributed memory parallel computers (DMPCs). It comprises a compiler, libraries for different target distributed computers, and execution analysis tools. No specific knowledge of the target machine is required of the user: only the specification of the data decomposition is left to them. The purpose of the paper is to present the overall design of the Pandore II environment. The high performance C input language is described and the main principles of the compilation and optimization techniques are presented. An example is used throughout the paper to illustrate the development process from a sequential C program with the Pandore II environment.
Citations: 0
An experimental parallelizing systolic compiler for regular programs
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315551
F. Wichmann
Abstract: Systolic transformation techniques are used for the parallelization of regular loop programs. After a short introduction to systolic transformation, an experimental compiler system is presented that generates parallel C code by applying different transformation methods. This system is designed as a basis for development towards a systolic compiler generating efficient fine-grained parallel code for regular programs or program parts.
Citations: 2
Beyond the data parallel paradigm: issues and options
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315541
G. Gao, Vivek Sarkar, L. A. Vazquez
Abstract: Currently, the predominant approach to compiling a program for parallel execution on a distributed memory multiprocessor is driven by the data parallel paradigm, in which user-specified data mappings are used to derive computation mappings via ad hoc rules such as owner-computes. We explore a more general approach which is driven by the selection of computation mappings from the program dependence constraints, and by the selection of dynamic data mappings from the localization constraints in different computation phases of the program. We believe that this approach provides promising solutions beyond what can be achieved by the data parallel paradigm. The paper outlines the general program model assumed for this work, states the optimization problems addressed by the approach, and presents solution methods for these problems.
Citations: 3
A programming model for reconfigurable mesh based parallel computers
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315547
M. Maresca, P. Baglietto
Abstract: The paper describes a high level programming model for reconfigurable mesh architectures. We analyze the engineering and technological issues of the implementation of reconfigurable mesh architectures and define an abstract architecture, called the polymorphic processor array (PPA). We define both a computation model and a programming model for polymorphic processor arrays and design a parallel programming language called Polymorphic Parallel C based on this programming model, for which we have implemented a compiler and a simulator. We have used these tools to validate a number of PPA algorithms and to estimate the performance of the corresponding programs.
Citations: 9
Structuring data parallelism using categorical data types
Pub Date: 1993-09-20 · DOI: 10.1109/PMMP.1993.315549
D. Skillicorn
Abstract: Data parallelism is a powerful approach to parallel computation, particularly when it is used with complex data types. Categorical data types are extensions of abstract data types that structure computations in a way that is useful for parallel implementation. In particular, they decompose the search for good algorithms on a data type into subproblems; all homomorphisms can be implemented by a single recursive, and often parallel, schema; and they are equipped with an equational system that can be used for software development by transformation.
Citations: 14