{"title":"任意嵌套仿射循环的无同步自动并行化","authors":"T. Klimek, M. Pałkowski, W. Bielecki","doi":"10.1109/SBAC-PADW.2016.16","DOIUrl":null,"url":null,"abstract":"This paper presents a new approach for extracting synchronization-free parallelism available in program loop nests. The approach allows for extracting parallelism for arbitrarily nested parametric loop nests, where the loop bounds and data accesses are affine functions of loop indices and symbolic parameters. Parallelization is realized using the transitive closure of a dependence graph. Speed-up of parallel code produced by means of the approach is studied using the NAS benchmark suite. Parallelism of loop nests is obtained by creating a kernel of computations represented in the OpenMP standard to be executed independently on multi-core computers. Results of an experimental study carried out by means of the many integrated core architecture Intel Xeon Phi is discussed.","PeriodicalId":186179,"journal":{"name":"2016 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Synchronization-Free Automatic Parallelization for Arbitrarily Nested Affine Loops\",\"authors\":\"T. Klimek, M. Pałkowski, W. Bielecki\",\"doi\":\"10.1109/SBAC-PADW.2016.16\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a new approach for extracting synchronization-free parallelism available in program loop nests. The approach allows for extracting parallelism for arbitrarily nested parametric loop nests, where the loop bounds and data accesses are affine functions of loop indices and symbolic parameters. Parallelization is realized using the transitive closure of a dependence graph. Speed-up of parallel code produced by means of the approach is studied using the NAS benchmark suite. Parallelism of loop nests is obtained by creating a kernel of computations represented in the OpenMP standard to be executed independently on multi-core computers. 
Results of an experimental study carried out by means of the many integrated core architecture Intel Xeon Phi is discussed.\",\"PeriodicalId\":186179,\"journal\":{\"name\":\"2016 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SBAC-PADW.2016.16\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SBAC-PADW.2016.16","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Synchronization-Free Automatic Parallelization for Arbitrarily Nested Affine Loops
This paper presents a new approach for extracting the synchronization-free parallelism available in program loop nests. The approach extracts parallelism from arbitrarily nested parametric loop nests in which the loop bounds and data accesses are affine functions of loop indices and symbolic parameters. Parallelization is realized using the transitive closure of a dependence graph. Parallelism of a loop nest is exposed by generating a computation kernel, expressed in the OpenMP standard, whose parts can be executed independently on multi-core computers. The speed-up of parallel code produced by the approach is studied using the NAS benchmark suite, and results of an experimental study carried out on the Many Integrated Core (MIC) architecture, Intel Xeon Phi, are discussed.
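To illustrate the kind of output such a transformation targets, the following is a minimal hand-written sketch in C with OpenMP. It is not produced by the authors' tool; the loop nest, the array name a, and the bounds N and M are hypothetical. The point is that when every dependence of an affine loop nest stays inside one iteration chain (a "slice"), the slices can be distributed across threads with a single parallel loop and no synchronization between them.

/* Compile with: cc -fopenmp slices.c
 * Illustrative sketch only: in this nest the only dependences are
 * a[i][j-1] -> a[i][j], so each value of i starts an independent slice.
 * Assigning whole slices to threads yields synchronization-free code,
 * which is the form of parallelism the paper's approach extracts. */
#include <stdio.h>
#include <omp.h>

#define N 1024
#define M 1024

static double a[N][M];

int main(void)
{
    /* Sequential original:
     *   for (int i = 0; i < N; i++)
     *     for (int j = 1; j < M; j++)
     *       a[i][j] = a[i][j - 1] + 1.0;
     */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++) {          /* one synchronization-free slice per i */
        for (int j = 1; j < M; j++) {
            a[i][j] = a[i][j - 1] + 1.0;   /* dependence stays inside the slice */
        }
    }

    printf("a[N-1][M-1] = %f\n", a[N - 1][M - 1]);
    return 0;
}

In the paper's setting, the set of such slices and their sources is computed automatically from the transitive closure of the (possibly parametric) dependence relation, rather than being identified by hand as in this toy example.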