{"title":"GPU上的多线程编程:计算机代数的指针和提示","authors":"M. M. Maza","doi":"10.1145/3115936.3115939","DOIUrl":null,"url":null,"abstract":"It is well-known that the advent of hardware acceleration technologies (multicore processors, graphics processing units, field programmable gate arrays) provide vast opportunities for innovation in computing. In particular, GPUs combined with low-level heterogeneous programming models, such as CUDA (the Compute Unified Device Architecture, see [6, 7]), brought super-computing to the level of the desktop computer. However, these low-level programming models carry notable challenges, even to expert programmers. Indeed, fully exploiting the power of hardware accelerators by writing CUDA code often requires significant code optimization effort. This two-hour tutorial attempts to cover the key principles that computer algebraists interested in GPU programming should have in mind. The first half introduces the basics of GPU architecture and the CUDA programming model: no preliminary experience with GPU programming will be assumed; see [10] for a reference. In the second hour, we shall discuss the recent developments in terms of GPU architecture (e.g. dynamic parallelism [12]) and programming models (e.g. OpenMP [1, 9] and OpenACC [8, 11] as well as techniques for improving code performance (e.g MWP-CWP mode [4], TMM model [5], MCM model [3]). Illustrative examples are taken from the CUMODP library [2] for dense polynomial arithmetic over finite fields.","PeriodicalId":102463,"journal":{"name":"Proceedings of the International Workshop on Parallel Symbolic Computation","volume":"159 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multithreaded programming on the GPU: pointers and hints for the computer algebraist\",\"authors\":\"M. M. 
Maza\",\"doi\":\"10.1145/3115936.3115939\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"It is well-known that the advent of hardware acceleration technologies (multicore processors, graphics processing units, field programmable gate arrays) provide vast opportunities for innovation in computing. In particular, GPUs combined with low-level heterogeneous programming models, such as CUDA (the Compute Unified Device Architecture, see [6, 7]), brought super-computing to the level of the desktop computer. However, these low-level programming models carry notable challenges, even to expert programmers. Indeed, fully exploiting the power of hardware accelerators by writing CUDA code often requires significant code optimization effort. This two-hour tutorial attempts to cover the key principles that computer algebraists interested in GPU programming should have in mind. The first half introduces the basics of GPU architecture and the CUDA programming model: no preliminary experience with GPU programming will be assumed; see [10] for a reference. In the second hour, we shall discuss the recent developments in terms of GPU architecture (e.g. dynamic parallelism [12]) and programming models (e.g. OpenMP [1, 9] and OpenACC [8, 11] as well as techniques for improving code performance (e.g MWP-CWP mode [4], TMM model [5], MCM model [3]). 
Illustrative examples are taken from the CUMODP library [2] for dense polynomial arithmetic over finite fields.\",\"PeriodicalId\":102463,\"journal\":{\"name\":\"Proceedings of the International Workshop on Parallel Symbolic Computation\",\"volume\":\"159 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the International Workshop on Parallel Symbolic Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3115936.3115939\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the International Workshop on Parallel Symbolic Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3115936.3115939","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multithreaded programming on the GPU: pointers and hints for the computer algebraist
It is well known that the advent of hardware acceleration technologies (multicore processors, graphics processing units, field-programmable gate arrays) provides vast opportunities for innovation in computing. In particular, GPUs combined with low-level heterogeneous programming models such as CUDA (the Compute Unified Device Architecture, see [6, 7]) have brought supercomputing to the level of the desktop computer. However, these low-level programming models pose notable challenges, even to expert programmers. Indeed, fully exploiting the power of hardware accelerators by writing CUDA code often requires significant code-optimization effort. This two-hour tutorial covers the key principles that computer algebraists interested in GPU programming should have in mind. The first half introduces the basics of GPU architecture and the CUDA programming model; no prior experience with GPU programming is assumed (see [10] for a reference). In the second hour, we discuss recent developments in GPU architecture (e.g. dynamic parallelism [12]) and programming models (e.g. OpenMP [1, 9] and OpenACC [8, 11]), as well as techniques for improving code performance (e.g. the MWP-CWP model [4], the TMM model [5], and the MCM model [3]). Illustrative examples are taken from the CUMODP library [2] for dense polynomial arithmetic over finite fields.