{"title":"为MIMD架构编译SIMD程序","authors":"M. J. Quinn, P. Hatcher","doi":"10.1109/ICCL.1990.63785","DOIUrl":null,"url":null,"abstract":"A summary of the advantages of data parallel languages a subclass of SIMD (single-instruction-stream, multiple-data-stream languages) is presented, and it is shown how programs written in a data parallel language can be compiled into loosely-synchronous MIMD (multiple-instruction-stream, multiple-data-stream) programs suitable for efficient execution on multicomputers. It is shown that the compiler must first locate the points at which message passing is required. These points are identical to the synchronization points. Therefore, message passing-primitives also synchronize the processors. Second, the compiler must transform the control structure of the input program to bring message-passing primitives to the outermost level. In order to allow a single physical processor to emulate a number of processing elements, the compiler must insert FOR loops around the blocks of code that are delimited by the calls to the message-passing primitives. Finally, data flow analysis can be used to eliminate some calls on message-passing routines and to combine multiple shorter messages into single, longer message whenever possible.<<ETX>>","PeriodicalId":317186,"journal":{"name":"Proceedings. 1990 International Conference on Computer Languages","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1990-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Compiling SIMD programs for MIMD architectures\",\"authors\":\"M. J. Quinn, P. Hatcher\",\"doi\":\"10.1109/ICCL.1990.63785\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A summary of the advantages of data parallel languages a subclass of SIMD (single-instruction-stream, multiple-data-stream languages) is presented, and it is shown how programs written in a data parallel language can be compiled into loosely-synchronous MIMD (multiple-instruction-stream, multiple-data-stream) programs suitable for efficient execution on multicomputers. It is shown that the compiler must first locate the points at which message passing is required. These points are identical to the synchronization points. Therefore, message passing-primitives also synchronize the processors. Second, the compiler must transform the control structure of the input program to bring message-passing primitives to the outermost level. In order to allow a single physical processor to emulate a number of processing elements, the compiler must insert FOR loops around the blocks of code that are delimited by the calls to the message-passing primitives. Finally, data flow analysis can be used to eliminate some calls on message-passing routines and to combine multiple shorter messages into single, longer message whenever possible.<<ETX>>\",\"PeriodicalId\":317186,\"journal\":{\"name\":\"Proceedings. 1990 International Conference on Computer Languages\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1990-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. 
1990 International Conference on Computer Languages\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCL.1990.63785\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. 1990 International Conference on Computer Languages","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCL.1990.63785","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A summary of the advantages of data parallel languages, a subclass of SIMD (single-instruction-stream, multiple-data-stream) languages, is presented, and it is shown how programs written in a data parallel language can be compiled into loosely synchronous MIMD (multiple-instruction-stream, multiple-data-stream) programs suitable for efficient execution on multicomputers. It is shown that the compiler must first locate the points at which message passing is required. These points are identical to the synchronization points; therefore, the message-passing primitives also synchronize the processors. Second, the compiler must transform the control structure of the input program to bring the message-passing primitives to the outermost level. To allow a single physical processor to emulate a number of processing elements, the compiler must then insert FOR loops around the blocks of code delimited by the calls to the message-passing primitives. Finally, data flow analysis can be used to eliminate some calls to message-passing routines and to combine multiple shorter messages into a single, longer message whenever possible.
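To make the transformation concrete, the following is a minimal sketch in C, not drawn from the paper itself. It assumes a data-parallel nearest-neighbor statement A[i] = B[i-1] + B[i+1], executed once per virtual processing element, compiled for a node that owns a contiguous block of N elements. The send_msg/recv_msg primitives are hypothetical stand-ins for a real multicomputer library (sends assumed buffered), and boundary nodes are ignored for brevity.

    /* Hypothetical message-passing primitives; real systems of the era
     * used vendor libraries (e.g., nCUBE or iPSC calls). */
    extern void send_msg(int node, void *buf, int len);
    extern void recv_msg(int node, void *buf, int len);

    #define N 1024                 /* virtual PEs emulated per physical node */
    double A[N], B[N + 2];         /* B carries two ghost cells, B[0], B[N+1] */

    void step(int left, int right) /* ranks of the neighboring nodes */
    {
        /* 1. Message passing hoisted to the outermost level; these calls
         *    are also the synchronization points between the nodes. */
        send_msg(left,  &B[1], sizeof(double));     /* my first element */
        send_msg(right, &B[N], sizeof(double));     /* my last element  */
        recv_msg(left,  &B[0],     sizeof(double)); /* left ghost cell  */
        recv_msg(right, &B[N + 1], sizeof(double)); /* right ghost cell */

        /* 2. FOR loop inserted around the block delimited by the
         *    message-passing calls: one physical processor emulates
         *    N processing elements in turn. */
        for (int i = 1; i <= N; i++)
            A[i - 1] = B[i - 1] + B[i + 1];
    }

The placement illustrates the two middle steps of the abstract at once: the message-passing calls sit at the outermost level, where they double as synchronization points, and the inserted FOR loop performs the emulation of many processing elements on one processor.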
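The final step, message combining, might look like the sketch below, again using hypothetical names. If data flow analysis proves that no receive or redefinition intervenes between two short sends to the same neighbor, the compiler can pack both values into one longer message, replacing two synchronizations with one.

    extern void send_msg(int node, void *buf, int len);

    #define N 1024
    double B[N + 2], C[N + 2];  /* two arrays whose left edges both travel left */

    void exchange_combined(int left)
    {
        /* Before combining: send_msg(left, &B[1], ...) followed by
         * send_msg(left, &C[1], ...).  After: one packed message. */
        double packed[2] = { B[1], C[1] };      /* pack both edge values */
        send_msg(left, packed, sizeof packed);  /* one longer message    */
    }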