{"title":"分布式内存程序的编译器优化","authors":"Rajesh K. Gupta","doi":"10.1109/SHPCC.1992.232651","DOIUrl":null,"url":null,"abstract":"The single-program multiple-data (SPMD) mode of execution is an effective approach for exploiting parallelism in programs written using the shared-memory programming model on distributed memory machines. However, during SPMD execution one must consider dependencies due to the transfer of data among the processors. Such dependencies can be avoided by reordering the communication operations (sends and receives). However, no formal framework has been developed to explicitly recognize the represent such dependencies. The author identifies two types of dependencies, namely communication dependencies and scheduling dependencies, and proposes to represent these dependencies explicitly in the program dependency graph. Next, he presents program transformations that use this dependency information in transforming the program and increasing the degree of parallelism exploited. Finally, the author presents program transformations that reduce communication related run-time overhead.<<ETX>>","PeriodicalId":254515,"journal":{"name":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Compiler optimizations for distributed-memory programs\",\"authors\":\"Rajesh K. Gupta\",\"doi\":\"10.1109/SHPCC.1992.232651\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The single-program multiple-data (SPMD) mode of execution is an effective approach for exploiting parallelism in programs written using the shared-memory programming model on distributed memory machines. However, during SPMD execution one must consider dependencies due to the transfer of data among the processors. Such dependencies can be avoided by reordering the communication operations (sends and receives). However, no formal framework has been developed to explicitly recognize the represent such dependencies. The author identifies two types of dependencies, namely communication dependencies and scheduling dependencies, and proposes to represent these dependencies explicitly in the program dependency graph. Next, he presents program transformations that use this dependency information in transforming the program and increasing the degree of parallelism exploited. 
Finally, the author presents program transformations that reduce communication related run-time overhead.<<ETX>>\",\"PeriodicalId\":254515,\"journal\":{\"name\":\"Proceedings Scalable High Performance Computing Conference SHPCC-92.\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-04-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Scalable High Performance Computing Conference SHPCC-92.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SHPCC.1992.232651\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Scalable High Performance Computing Conference SHPCC-92.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SHPCC.1992.232651","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The single-program multiple-data (SPMD) mode of execution is an effective approach for exploiting parallelism in programs written using the shared-memory programming model on distributed-memory machines. However, during SPMD execution one must consider dependencies that arise from the transfer of data among the processors. Such dependencies can be avoided by reordering the communication operations (sends and receives). However, no formal framework has been developed to explicitly recognize and represent such dependencies. The author identifies two types of dependencies, namely communication dependencies and scheduling dependencies, and proposes representing them explicitly in the program dependency graph. Next, he presents program transformations that use this dependency information to transform the program and increase the degree of parallelism exploited. Finally, the author presents program transformations that reduce communication-related run-time overhead.
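
To make the idea of reordering communication operations concrete, the following is a minimal illustrative sketch, not the paper's framework: it shows an SPMD exchange in which the send and receive are initiated as early as their data dependencies allow and the matching wait is deferred past independent local work, so the transfer overlaps with computation. It uses modern MPI non-blocking calls (which postdate the paper); names such as compute_local and N are hypothetical placeholders.

/* Sketch: reordering communication in an SPMD program so that message
 * transfer overlaps with independent local computation. */
#include <mpi.h>

#define N 1024  /* hypothetical message/array size */

/* Hypothetical local computation that does not depend on the incoming data. */
static void compute_local(double *a, int n) {
    for (int i = 0; i < n; i++)
        a[i] = a[i] * 2.0 + 1.0;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double sendbuf[N], recvbuf[N], local[N];
    for (int i = 0; i < N; i++) { sendbuf[i] = rank; local[i] = i; }

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    /* Initiate communication early: the send depends only on sendbuf,
     * which is already valid, so nothing forces it to wait. */
    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Schedule independent work between initiation and completion,
     * so communication proceeds concurrently with computation. */
    compute_local(local, N);

    /* recvbuf may only be read, and sendbuf only overwritten, after this
     * wait: ordering constraints of this kind are the communication and
     * scheduling dependencies the paper proposes to record explicitly in
     * the program dependency graph. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}

In this sketch the run-time saving comes purely from the reordering: moving the initiation earlier and the wait later does not change what is communicated, only when the processor is forced to stall for it.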