{"title":"共享内存编程的真正限制","authors":"D. Pressel, M. Behr, S. Thompson","doi":"10.21236/ada373358","DOIUrl":null,"url":null,"abstract":"Abstract : Shared memory parallel computers have the reputation for being the easiest type of parallel computers to program. At the same time, they are frequently regarded as being the least scalable type of parallel computer. In particular, shared memory parallel computers are frequently programmed using a form of loop-level parallelism (usually based on some combination of compiler directives and automatic parallelization). However, in discussing this form of parallelism, the experts in the field routinely say that it will not scale past 4-16 processors (the number varies among experts). This report investigates what the true limitations are to this type of parallel programming. The discussions are largely based on the experiences that the authors had in porting the Implicit Computational Fluid Dynamics Code (F3D) to numerous shared memory systems from SGI, Cray, and Convex.","PeriodicalId":93135,"journal":{"name":"PDPTA '19 : proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniquess & Applications. International Conference on Parallel and Distributed Processing Techniques and Applications (2019 : Las Vegas,...","volume":"22 1","pages":"2048-2054"},"PeriodicalIF":0.0000,"publicationDate":"1999-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"The True Limitations of Shared Memory Programming\",\"authors\":\"D. Pressel, M. Behr, S. Thompson\",\"doi\":\"10.21236/ada373358\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract : Shared memory parallel computers have the reputation for being the easiest type of parallel computers to program. At the same time, they are frequently regarded as being the least scalable type of parallel computer. 
In particular, shared memory parallel computers are frequently programmed using a form of loop-level parallelism (usually based on some combination of compiler directives and automatic parallelization). However, in discussing this form of parallelism, the experts in the field routinely say that it will not scale past 4-16 processors (the number varies among experts). This report investigates what the true limitations are to this type of parallel programming. The discussions are largely based on the experiences that the authors had in porting the Implicit Computational Fluid Dynamics Code (F3D) to numerous shared memory systems from SGI, Cray, and Convex.\",\"PeriodicalId\":93135,\"journal\":{\"name\":\"PDPTA '19 : proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniquess & Applications. International Conference on Parallel and Distributed Processing Techniques and Applications (2019 : Las Vegas,...\",\"volume\":\"22 1\",\"pages\":\"2048-2054\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PDPTA '19 : proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniquess & Applications. 
International Conference on Parallel and Distributed Processing Techniques and Applications (2019 : Las Vegas,...\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21236/ada373358\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PDPTA '19 : proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniquess & Applications. International Conference on Parallel and Distributed Processing Techniques and Applications (2019 : Las Vegas,...","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21236/ada373358","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Shared memory parallel computers have a reputation for being the easiest type of parallel computer to program. At the same time, they are frequently regarded as the least scalable type of parallel computer. In particular, shared memory parallel computers are often programmed using a form of loop-level parallelism, usually based on some combination of compiler directives and automatic parallelization. However, in discussing this form of parallelism, experts in the field routinely say that it will not scale past 4-16 processors (the exact number varies among experts). This report investigates the true limitations of this type of parallel programming. The discussion is largely based on the authors' experiences in porting the Implicit Computational Fluid Dynamics Code (F3D) to numerous shared memory systems from SGI, Cray, and Convex.