{"title":"MPI中的一级通信","authors":"E. Demaine","doi":"10.1109/MPIDC.1996.534113","DOIUrl":null,"url":null,"abstract":"We compare three concurrent-programming languages based on message-passing: Concurrent ML (CML), Occam and MPI. The main advantage of the CML extension of Standard ML (SML) is that communication events are first-class just like normal program variables (e.g., integers), that is, they can be created at run-time, assigned to variables, and passed to and returned from functions. In addition, it provides dynamic process and channel creation. Occam, first designed for transputers, is based on a static model of process and channel creation. We examine how these limitations enforce severe restrictions on communication events, and how they affect the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message-passing in C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language. In particular most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, and less general first-class communication events. We propose an extension to MPI which provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. Assuming that the modifications are incorporated into the standard and its implementations higher-order concurrency and its advantages will become more widespread.","PeriodicalId":432081,"journal":{"name":"Proceedings. Second MPI Developer's Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"First class communication in MPI\",\"authors\":\"E. 
Demaine\",\"doi\":\"10.1109/MPIDC.1996.534113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We compare three concurrent-programming languages based on message-passing: Concurrent ML (CML), Occam and MPI. The main advantage of the CML extension of Standard ML (SML) is that communication events are first-class just like normal program variables (e.g., integers), that is, they can be created at run-time, assigned to variables, and passed to and returned from functions. In addition, it provides dynamic process and channel creation. Occam, first designed for transputers, is based on a static model of process and channel creation. We examine how these limitations enforce severe restrictions on communication events, and how they affect the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message-passing in C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language. In particular most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, and less general first-class communication events. We propose an extension to MPI which provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. Assuming that the modifications are incorporated into the standard and its implementations higher-order concurrency and its advantages will become more widespread.\",\"PeriodicalId\":432081,\"journal\":{\"name\":\"Proceedings. 
Second MPI Developer's Conference\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1996-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. Second MPI Developer's Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MPIDC.1996.534113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Second MPI Developer's Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MPIDC.1996.534113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
We compare three concurrent-programming languages based on message passing: Concurrent ML (CML), Occam, and MPI. The main advantage of the CML extension of Standard ML (SML) is that communication events are first-class, just like ordinary program values (e.g., integers): they can be created at run time, assigned to variables, and passed to and returned from functions. In addition, CML provides dynamic process and channel creation. Occam, first designed for transputers, is based on a static model of process and channel creation. We examine how this static model imposes severe restrictions on communication events and how it limits the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message passing from C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language; in particular, most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, as well as a less general form of first-class communication events. We propose an extension to MPI that provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. If these modifications are incorporated into the standard and its implementations, higher-order concurrency and its advantages will become more widespread.
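The first-class property described above can be illustrated outside CML itself. The sketch below uses Go, not a language discussed in the paper: Go channels, like CML events, are created at run time, assigned to variables, and passed to functions, and Go's select statement plays a role analogous to CML's choose combinator (though, unlike choose, select is syntax rather than a first-class value). The function name chooseFirst is our own for illustration.

```go
package main

import "fmt"

// chooseFirst mimics CML's choose over two integer communications: it
// commits to whichever channel becomes ready first and returns the value
// received. The channels themselves are first-class values, passed in as
// ordinary parameters.
func chooseFirst(a, b chan int) int {
	select {
	case n := <-a:
		return n
	case n := <-b:
		return n
	}
}

func main() {
	// Channels are created at run time and bound to variables.
	c1 := make(chan int)
	c2 := make(chan int)

	// A concurrent sender offers a value on c1; c2 has no sender,
	// so the select inside chooseFirst must commit to c1.
	go func() { c1 <- 42 }()

	fmt.Println(chooseFirst(c1, c2)) // prints 42
}
```

CML's wrap and guard combinators have no direct Go analog precisely because select is not a value: one cannot post-process or lazily construct a select alternative and hand it to another function, which is the expressiveness gap the paper's proposed MPI extension targets.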