The Prepare HPF Programming Environment
A. Veen
IEEE Parallel & Distributed Technology: Systems & Applications
DOI: 10.1109/M-PDT.1994.329808
The European Prepare consortium has constructed an integrated programming environment to develop, analyze, and restructure HPF programs. The consortium consists of three industrial and six academic partners and is coordinated by ACE, Europe’s leading compiler manufacturer. It represents most of Europe’s expertise in automatic parallelization for distributed-memory computers, making directly available, for instance, the experience gained during the development of the Vienna Fortran Compilation System.

The Prepare environment is based on three tightly integrated components. A parallelization engine transforms the source program’s original data-parallel form into SPMD form. An interactive engine reports to the programmer the extent to which the system can parallelize the program, indicates the obstacles preventing parallelization, facilitates the removal of such obstacles, and provides performance measures. A compilation system generates highly optimized code that fully exploits the target platform’s intraprocessor parallelism.

The Prepare project’s unique strength is the tight integration of these components. The interactive engine can access the internal representation of the compiler. The compiler and the parallelization engine use each other’s analysis information and mutually influence each other’s optimization decisions. This integration brings several advantages to the user. Interaction is much more natural, because the communication between the user and the system is always in terms of the original source program. The user does not have to be aware of the elaborate transformations performed by the compiler. Performance is much better, because the parallelizer, vectorizer, optimizer, and code generator all cooperate (rather than compete) to exploit the many performance-enhancing features that high-end massively parallel platforms provide. This is crucial because of the often complicated interaction between these features.
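As a minimal sketch (not taken from the paper itself), the kind of data-parallel HPF source such a parallelization engine consumes looks like the fragment below: the programmer states how arrays are distributed with directives, and deriving the SPMD node program is the compiler's job. The array sizes, processor count, and names here are illustrative assumptions.

```fortran
! Illustrative sketch only -- this fragment is not from the paper.
! A data-parallel relaxation step in HPF: directives declare the data
! layout; the parallelization engine derives the per-node SPMD code.
PROGRAM relax
  REAL :: a(1024), b(1024)
!HPF$ PROCESSORS p(4)
!HPF$ DISTRIBUTE a(BLOCK) ONTO p
!HPF$ ALIGN b(i) WITH a(i)
  INTEGER :: i
  b = 1.0
  ! Each element update is independent, so iterations may run in parallel.
  FORALL (i = 2:1023) a(i) = 0.5 * (b(i-1) + b(i+1))
END PROGRAM relax
```

Under this BLOCK distribution each of the four processors owns 256 elements of `a`; an SPMD translation restricts the loop to each node's local bounds and inserts communication for the boundary elements of `b` read from neighboring nodes.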
Without special tools, this high level of integration is not compatible with the strong modularization required for software as complex as a parallelizing compiler. We adopted the Cosy compilation system developed in the Compare project. In Cosy, a large set of engines (concurrent tasks that each perform one algorithm) access a shared internal representation of the program, gradually transforming it and enriching it with analysis information. Compilation phases do not have to be ordered linearly, which is a great advantage for a compiler that combines vectorization, parallelization, and sophisticated optimizations. Another advantage is that on a (shared-memory) parallel host the engines work in parallel.

We have found that the HPF subset is well designed, except for some loose ends concerning subprogram interfaces and the relation between multiple PROCESSORS directives. We question the usefulness of explicit dynamic distributions. To our surprise, much of the complexity of compiling HPF stems from its Fortran 90 base. For instance, features like assumed-shape arrays require an elaborate system of runtime descriptors and sophisticated analysis to recognize the cases where they can be omitted. Such requirements permeate all aspects of distributed array support.
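The assumed-shape complication mentioned above can be sketched as follows (an illustrative Fortran 90 fragment, not drawn from the paper; the names are hypothetical). Because the dummy argument declares only its rank, every call must pass a runtime descriptor carrying the array's extent and stride, and the callee's code is generated against that descriptor rather than a compile-time shape.

```fortran
! Illustrative sketch only -- not from the paper.
MODULE scaling
CONTAINS
  SUBROUTINE scale(x, f)
    ! x(:) is assumed-shape: its extent is known only at run time,
    ! so the caller passes a descriptor with bounds and stride.
    REAL, INTENT(INOUT) :: x(:)
    REAL, INTENT(IN)    :: f
    x = f * x                     ! loop bounds come from the descriptor
  END SUBROUTINE scale
END MODULE scaling
```

When analysis can prove that every call site passes an array of statically known, contiguous shape, the descriptor machinery can be omitted, which is the optimization opportunity the text alludes to. For distributed arrays the descriptor must additionally record the data distribution, which is why this requirement permeates distributed array support.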