First class communication in MPI

E. Demaine
{"title":"MPI中的一级通信","authors":"E. Demaine","doi":"10.1109/MPIDC.1996.534113","DOIUrl":null,"url":null,"abstract":"We compare three concurrent-programming languages based on message-passing: Concurrent ML (CML), Occam and MPI. The main advantage of the CML extension of Standard ML (SML) is that communication events are first-class just like normal program variables (e.g., integers), that is, they can be created at run-time, assigned to variables, and passed to and returned from functions. In addition, it provides dynamic process and channel creation. Occam, first designed for transputers, is based on a static model of process and channel creation. We examine how these limitations enforce severe restrictions on communication events, and how they affect the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message-passing in C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language. In particular most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, and less general first-class communication events. We propose an extension to MPI which provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. Assuming that the modifications are incorporated into the standard and its implementations higher-order concurrency and its advantages will become more widespread.","PeriodicalId":432081,"journal":{"name":"Proceedings. Second MPI Developer's Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"First class communication in MPI\",\"authors\":\"E. Demaine\",\"doi\":\"10.1109/MPIDC.1996.534113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We compare three concurrent-programming languages based on message-passing: Concurrent ML (CML), Occam and MPI. The main advantage of the CML extension of Standard ML (SML) is that communication events are first-class just like normal program variables (e.g., integers), that is, they can be created at run-time, assigned to variables, and passed to and returned from functions. In addition, it provides dynamic process and channel creation. Occam, first designed for transputers, is based on a static model of process and channel creation. We examine how these limitations enforce severe restrictions on communication events, and how they affect the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message-passing in C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language. In particular most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, and less general first-class communication events. We propose an extension to MPI which provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. 
Assuming that the modifications are incorporated into the standard and its implementations higher-order concurrency and its advantages will become more widespread.\",\"PeriodicalId\":432081,\"journal\":{\"name\":\"Proceedings. Second MPI Developer's Conference\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1996-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. Second MPI Developer's Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MPIDC.1996.534113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Second MPI Developer's Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MPIDC.1996.534113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

We compare three concurrent-programming languages based on message-passing: Concurrent ML (CML), Occam, and MPI. The main advantage of the CML extension of Standard ML (SML) is that communication events are first-class just like ordinary program values (e.g., integers): they can be created at run-time, assigned to variables, and passed to and returned from functions. In addition, CML provides dynamic process and channel creation. Occam, first designed for transputers, is based on a static model of process and channel creation. We examine how this static model places severe restrictions on communication events and how it affects the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message-passing from C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language. In particular, most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, and a less general form of first-class communication events. We propose an extension to MPI that provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. Assuming that the modifications are incorporated into the standard and its implementations, higher-order concurrency and its advantages will become more widespread.
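
As a rough illustration of the "less general form of first-class communication events" already present in standard MPI, the sketch below treats MPI_Request handles from nonblocking receives as run-time values and uses MPI_Waitany as a restricted analogue of CML's choose. This is a minimal example assuming an ordinary MPI-2 C implementation, not code from the paper or the proposed extension; unlike CML, the alternatives here cannot be wrapped with post-processing actions or guarded by deferred event creation, which is exactly the gap the proposed choose/wrap/guard combinators would close.

```c
/* Minimal sketch: MPI_Request values as a restricted form of
 * "first-class" communication events, with MPI_Waitany acting as a
 * limited analogue of CML's choose over a fixed set of receives.
 * Build with an MPI C compiler (e.g. mpicc) and run with at least
 * three processes (e.g. mpirun -np 3 ./a.out).
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 3) {
        if (rank == 0)
            fprintf(stderr, "run with at least 3 processes\n");
        MPI_Finalize();
        return 0;
    }

    if (rank == 0) {
        /* Post two receives; the resulting MPI_Request handles are
         * ordinary run-time values that can be stored in arrays or
         * passed to functions -- a restricted kind of first-class event. */
        int msg[2];
        MPI_Request reqs[2];
        MPI_Irecv(&msg[0], 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&msg[1], 1, MPI_INT, 2, 0, MPI_COMM_WORLD, &reqs[1]);

        /* "Choose" whichever communication completes first.  Unlike
         * CML's choose, the set of alternatives is fixed once posted
         * and cannot be wrapped or guarded. */
        int which;
        MPI_Waitany(2, reqs, &which, MPI_STATUS_IGNORE);
        printf("rank 0: first message from rank %d, value %d\n",
               which + 1, msg[which]);

        /* Complete the remaining receive so no request is leaked. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    } else if (rank == 1 || rank == 2) {
        /* Each of ranks 1 and 2 sends its rank number to rank 0. */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```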