Why So Many?: A Brief Tour of Haskell DSLs for Parallel Programming
Patrick Maier
Proceedings of the 1st International Workshop on Real World Domain Specific Languages, 2016-03-12
DOI: 10.1145/2889420.2893172
Citations: 0
Abstract
The proliferation of inexpensive parallel compute devices, from multicore CPUs and GPUs to manycore co-processors and FPGAs, has increased the demand for parallel programming languages. Yet, unlike in the 1970s and 80s, this has not spawned a flurry of entirely new programming languages. Rather, it has driven the development of parallel extensions and libraries for existing languages. Functional programming has been hailed as an answer to the parallel software crisis, even before that crisis erupted. Absence of side effects and mutable state, the argument goes, eliminates many of the thorny issues (like locking and data races) that plague developers in mainstream imperative programming languages, while also enabling automated scheduling of parallelism. This leaves the functional programmer to concentrate solely on introducing parallelism into their application. Voilà, the dream of declarative parallel programming come true. The popular functional language Haskell should have been well-placed to deliver on these promises. In fact, the GUM system [17, 16] did just that in the 1990s, providing an elegant declarative task parallel programming model and a runtime system for transparently scheduling tasks across the network. Proper compiler support for this programming model on multicores, however, did not arrive until 2009 [12, 2, 10], half a decade after multicore CPUs became ubiquitous. And when multicore support went mainstream with GHC release 6.12, it became apparent that the programming model, while elegant, is intricately interwoven with Haskell's non-strict evaluation order. This makes predicting the behaviour of a parallel Haskell program more difficult than it would be in a strict language, and constitutes a major hurdle for parallel programmers, particularly for Haskell novices. Moreover, by 2009 multicores weren't the only parallel game anymore; GPUs had the buzz.
Yet, the task parallel programming model that works well on multicore CPUs is not suited for offloading computations to data parallel devices like GPUs.
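The declarative task parallel model the abstract describes can be sketched with GHC's `par` and `pseq` combinators from `Control.Parallel`. The example below is illustrative and not taken from the paper; `pfib` is a hypothetical function chosen to show the entanglement with non-strict evaluation: `par` only *sparks* a computation, and whether the spark is ever evaluated in parallel (rather than lazily on demand by the main thread) depends on evaluation order, which is why `pseq` is needed to pin that order down.

```haskell
import Control.Parallel (par, pseq)

-- Hypothetical example of the classic task parallel programming model:
-- 'x `par` e' sparks x for possible parallel evaluation, then evaluates e;
-- 'y `pseq` e' evaluates y to weak head normal form before e, fixing the
-- evaluation order that laziness would otherwise leave unspecified.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

main :: IO ()
main = print (pfib 20)
```

Note the subtlety: writing `x `par` (x + y)` instead would be legal but fragile, because `(+)` might demand `x` first, discarding the spark's benefit. Getting parallelism from this model requires reasoning about demand, exactly the hurdle for novices that the abstract points out. (Compile with `-threaded` and run with `+RTS -N` to enable multicore scheduling.)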