Hans-Wolfgang Loidl, P. Trinder, K. Hammond, S. Junaidu, Richard G. Morgan, S. Jones
{"title":"Engineering parallel symbolic programs in GPH","authors":"Hans-Wolfgang Loidl, P. Trinder, K. Hammond, S. Junaidu, Richard G. Morgan, S. Jones","doi":"10.1002/(SICI)1096-9128(199910)11:12%3C701::AID-CPE443%3E3.0.CO;2-P","DOIUrl":null,"url":null,"abstract":"We investigate the claim that functional languages offer low-cost parallelism in the context of symbolic programs on modest parallel architectures. In our investigation we present the first comparative study of the construction of large applications in a parallel functional language, in our case in Glasgow Parallel Haskell (GPH). The applications cover a range of application areas, use several parallel programming paradigms, and are measured on two very different parallel architectures. \n \nOn the applications level the most significant result is that we are able to achieve modest wall-clock speedups (between factors of 2 and 10) over the optimised sequential versions for all but one of the programs. Speedups are obtained even for programs that were not written with the intention of being parallelised. These gains are achieved with a relatively small programmer-effort. One reason for the relative ease of parallelisation is the use of evaluation strategies, a new parallel programming technique that separates the algorithm from the co-ordination of parallel behaviour. \n \nOn the language level we show that the combination of lazy and parallel evaluation is useful for achieving a high level of abstraction. In particular we can describe top-level parallelism, and also preserve module abstraction by describing parallelism over the data structures provided at the module interface (‘data-oriented parallelism’). Furthermore, we find that the determinism of the language is helpful, as is the largely implicit nature of parallelism in GPH. Copyright © 1999 John Wiley & Sons, Ltd.","PeriodicalId":199059,"journal":{"name":"Concurr. Pract. Exp.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"44","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurr. Pract. Exp.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/(SICI)1096-9128(199910)11:12%3C701::AID-CPE443%3E3.0.CO;2-P","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 44
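The central technique named in the abstract, evaluation strategies, can be illustrated with a minimal sketch. The code below uses the modern Control.Parallel.Strategies API from the parallel package, a descendant of the GPH strategies module the paper describes; the function names (expensive, results) are illustrative placeholders rather than anything taken from the paper. The point is that the algorithm (map expensive over a list) is plain Haskell, while the parallel coordination is attached separately by a strategy.

import Control.Parallel.Strategies (parList, rdeepseq, using)

-- Algorithm: an ordinary, deliberately expensive per-element function
-- (hypothetical example, not taken from the paper).
expensive :: Int -> Int
expensive n = sum [1 .. n * 10000]

-- Coordination: 'parList rdeepseq' sparks the evaluation of each list
-- element to normal form in parallel; the value of 'results' is unchanged.
results :: [Int]
results = map expensive [1 .. 200] `using` parList rdeepseq

main :: IO ()
main = print (sum results)

Compiled with ghc -threaded and run with +RTS -N, the sparks created by the strategy can be picked up by idle cores; deleting the using clause leaves a semantically identical sequential program, which is the separation of algorithm from coordination the abstract refers to. Applying a strategy to the data structures provided at a module interface, rather than inside the module, is what the abstract calls 'data-oriented parallelism'.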