Restructuring programs for high-speed computers with Polaris

W. Blume, R. Eigenmann, Keith Faigin, John Grout, Jaejin Lee, T. Lawrence, J. Hoeflinger, D. Padua, Y. Paek, Paul Petersen, W. Pottenger, Lawrence Rauchwerger, P. Tu, Stephen Weatherford

1996 Proceedings ICPP Workshop on Challenges for Parallel Processing, August 12, 1996. DOI: 10.1109/ICPPW.1996.538601
Citations: 31
Abstract
The ability to automatically parallelize standard programming languages results in program portability across a wide range of machine architectures. It is the goal of the Polaris project to develop a new parallelizing compiler that overcomes limitations of current compilers. While current parallelizing compilers may succeed on small kernels, they often fail to extract any meaningful parallelism from whole applications. After a study of application codes, it was concluded that by adding a few new techniques to current compilers, automatic parallelization becomes feasible for a range of whole applications. The techniques needed are interprocedural analysis, scalar and array privatization, symbolic dependence analysis, and advanced induction and reduction recognition and elimination, along with run-time techniques to permit the parallelization of loops with unknown dependence relations.
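To make two of the named techniques concrete, the sketch below is a minimal, hand-written C/OpenMP example (not Polaris output; the function name and array sizes are invented for illustration). It shows what scalar privatization and reduction recognition accomplish: the temporary `t` must become private to each iteration, and the accumulation into `sum` must be recognized as a reduction, before the loop can safely run in parallel. The OpenMP clauses state explicitly what a Polaris-style analysis would infer automatically.

```c
#include <stdio.h>

#define N 1000

/* A minimal sketch (not Polaris output) of two transformations named in the
 * abstract.  The scalar `t` is written in every iteration, so a naive
 * parallelization would race on it; privatizing it gives each thread its own
 * copy.  The accumulation into `sum` is a reduction: the additions may be
 * reordered per thread and combined at the end. */
double dot_scaled(const double a[N], const double b[N])
{
    double sum = 0.0;
    double t;                         /* shared as written: must be privatized */

    #pragma omp parallel for private(t) reduction(+ : sum)
    for (int i = 0; i < N; i++) {
        t = a[i] * b[i];              /* defined before use in every iteration */
        sum += t;                     /* recognized and rewritten as a reduction */
    }
    return sum;
}

int main(void)
{
    double a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = 0.5 * i; b[i] = 2.0; }
    printf("sum = %f\n", dot_scaled(a, b));
    return 0;
}
```

Loops whose dependence structure cannot be resolved by such static analysis are the target of the run-time techniques mentioned in the abstract, which defer the independence test to execution time.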