{"title":"OpenMPD:分布式存储系统的基于指令的数据并行语言扩展","authors":"Jinpil Lee, M. Sato, T. Boku","doi":"10.1109/ICPP-W.2008.28","DOIUrl":null,"url":null,"abstract":"Open MPD is a language extension for programming on distributed memory systems that helps users by having minimal and simple notations. Although MPI is the de facto standard for parallel programming on distributed memory systems, writing MPI programs is often a time-consuming and complicated process. Open MPD supports typical parallelization-based on the data parallel paradigm and work sharing, and enables parallelizing the original sequential code using minimal modification with simple directives, like Open MP. And for flexibility, it allows to combine with explicit MPI coding on parallelization with Open MP for more complicated parallel codes. Experimental results of our implementation show that Open MPD achieves three to eight times speed-up on a PC cluster with eight processors given a small modification to the original sequential code.","PeriodicalId":231042,"journal":{"name":"2008 International Conference on Parallel Processing - Workshops","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"OpenMPD: A Directive-Based Data Parallel Language Extension for Distributed Memory Systems\",\"authors\":\"Jinpil Lee, M. Sato, T. Boku\",\"doi\":\"10.1109/ICPP-W.2008.28\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Open MPD is a language extension for programming on distributed memory systems that helps users by having minimal and simple notations. Although MPI is the de facto standard for parallel programming on distributed memory systems, writing MPI programs is often a time-consuming and complicated process. Open MPD supports typical parallelization-based on the data parallel paradigm and work sharing, and enables parallelizing the original sequential code using minimal modification with simple directives, like Open MP. And for flexibility, it allows to combine with explicit MPI coding on parallelization with Open MP for more complicated parallel codes. 
Experimental results of our implementation show that Open MPD achieves three to eight times speed-up on a PC cluster with eight processors given a small modification to the original sequential code.\",\"PeriodicalId\":231042,\"journal\":{\"name\":\"2008 International Conference on Parallel Processing - Workshops\",\"volume\":\"85 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 International Conference on Parallel Processing - Workshops\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPP-W.2008.28\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 International Conference on Parallel Processing - Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPP-W.2008.28","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
OpenMPD is a language extension for programming distributed memory systems that helps users by providing a minimal and simple notation. Although MPI is the de facto standard for parallel programming on distributed memory systems, writing MPI programs is often a time-consuming and complicated process. OpenMPD supports typical parallelization based on the data parallel paradigm and work sharing, and it allows the original sequential code to be parallelized with minimal modification using simple directives, much like OpenMP. For flexibility, OpenMPD parallelization can also be combined with explicit MPI coding for more complicated parallel codes. Experimental results of our implementation show that OpenMPD achieves a three- to eight-fold speed-up on a PC cluster with eight processors, given only a small modification to the original sequential code.
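
To make the contrast drawn in the abstract more concrete, the following C sketch (not taken from the paper, and not using OpenMPD's own directive syntax) compares a directive-based parallel reduction written with a standard OpenMP pragma against the explicit decomposition and collective communication that plain MPI requires for the same loop. The array a, the size N, the equal-block decomposition, the replicated input array, and the function names sum_openmp and sum_mpi are all illustrative assumptions, not artifacts of the paper.

/* Sketch: directive-based parallelization of a reduction (OpenMP) versus
 * explicit MPI decomposition of the same loop. OpenMPD applies the same
 * "annotate the sequential code" idea, but to distributed memory; its
 * directives are not reproduced here. */
#include <stdio.h>
#include <omp.h>
#include <mpi.h>

#define N 1000000

/* Directive-based style: one pragma, the sequential loop body is unchanged. */
double sum_openmp(const double *a)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];
    return sum;
}

/* Explicit MPI style: the programmer decomposes the index range by hand,
 * computes a partial result per rank, and combines them with a collective. */
double sum_mpi(const double *a)
{
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? N : lo + chunk;

    double local = 0.0;
    for (int i = lo; i < hi; i++)
        local += a[i];

    double sum = 0.0;
    MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    return sum;   /* meaningful on rank 0 only */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    static double a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1.0;               /* array replicated on every rank for simplicity */

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double s_mpi = sum_mpi(a);
    double s_omp = sum_openmp(a); /* each rank also runs the shared-memory version */
    if (rank == 0)
        printf("MPI sum = %f, OpenMP sum = %f\n", s_mpi, s_omp);

    MPI_Finalize();
    return 0;
}

The point of the comparison is the one the abstract makes: the directive version keeps the sequential loop intact and adds a single annotation, whereas the MPI version forces the programmer to manage ranks, index ranges, and communication explicitly. OpenMPD aims to carry the directive-style workflow over to distributed memory, which is what "minimal modification with simple directives" refers to.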