{"title":"面向MPI中的异步元计算","authors":"A. Sodan","doi":"10.1109/HPCSA.2002.1019158","DOIUrl":null,"url":null,"abstract":"Metacomputing so far has been done mostly in more or less static configurations. However, applications with dynamic irregular behavior are increasing in significance and the computing platforms more often are time-sharing environments with varying system load. Thus, possibilities for dynamic connection and dynamic workload migration are becoming important. The paper discusses an approach to perform asynchronous workload balancing using the standard parallel library MPI. MPI and threads typically live in more or less separated worlds and the thread extension of MPI-2 is mainly meant to exploit more efficiently per SMP node within a model which is still mostly SPMD. We have extended MPI by dynamic mechanisms to automatically balance workload on the basis of threads and dynamic status/resource monitoring. Our extended library TeMPI is designed to run with in a minimum version with MPICH and thus MPICH-G2 in the Globus grid environment.","PeriodicalId":111862,"journal":{"name":"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Towards asynchronous metacomputing in MPI\",\"authors\":\"A. Sodan\",\"doi\":\"10.1109/HPCSA.2002.1019158\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Metacomputing so far has been done mostly in more or less static configurations. However, applications with dynamic irregular behavior are increasing in significance and the computing platforms more often are time-sharing environments with varying system load. Thus, possibilities for dynamic connection and dynamic workload migration are becoming important. The paper discusses an approach to perform asynchronous workload balancing using the standard parallel library MPI. MPI and threads typically live in more or less separated worlds and the thread extension of MPI-2 is mainly meant to exploit more efficiently per SMP node within a model which is still mostly SPMD. We have extended MPI by dynamic mechanisms to automatically balance workload on the basis of threads and dynamic status/resource monitoring. 
Our extended library TeMPI is designed to run with in a minimum version with MPICH and thus MPICH-G2 in the Globus grid environment.\",\"PeriodicalId\":111862,\"journal\":{\"name\":\"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2002-06-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPCSA.2002.1019158\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPCSA.2002.1019158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Metacomputing has so far been done mostly in more or less static configurations. However, applications with dynamic, irregular behavior are growing in significance, and the computing platforms are more often time-sharing environments with varying system load. Thus, support for dynamic connection and dynamic workload migration is becoming important. The paper discusses an approach to performing asynchronous workload balancing using the standard parallel library MPI. MPI and threads typically live in more or less separate worlds, and the thread extension of MPI-2 is mainly meant to exploit each SMP node more efficiently within a model that is still mostly SPMD. We have extended MPI with dynamic mechanisms to automatically balance workload on the basis of threads and dynamic status/resource monitoring. Our extended library TeMPI is designed to run, in a minimal version, with MPICH and thus with MPICH-G2 in the Globus grid environment.
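The abstract does not describe TeMPI's interface, so the following is only a minimal sketch, in plain MPI and pthreads, of the general pattern it alludes to: a background thread that asynchronously monitors for load-balancing traffic while the main thread runs the usual SPMD computation, using the MPI-2 thread support (MPI_THREAD_MULTIPLE) mentioned above. The tag name and message contents are hypothetical.

```c
/* Generic sketch (not the TeMPI API): a monitoring thread polls for
 * balancing messages while the main thread runs the SPMD computation.
 * Requires an MPI implementation that provides MPI_THREAD_MULTIPLE. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define BALANCE_TAG 99          /* hypothetical tag for balancing messages */
static volatile int running = 1;

/* Asynchronously check for incoming workload-migration messages. */
static void *monitor(void *arg) {
    while (running) {
        int flag = 0;
        MPI_Status st;
        MPI_Iprobe(MPI_ANY_SOURCE, BALANCE_TAG, MPI_COMM_WORLD, &flag, &st);
        if (flag) {
            int work_units;
            MPI_Recv(&work_units, 1, MPI_INT, st.MPI_SOURCE, BALANCE_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... enqueue or migrate the received work units here ... */
        }
        usleep(1000);           /* avoid busy-waiting */
    }
    return NULL;
}

int main(int argc, char **argv) {
    int provided, rank;
    /* Request full multithreading via the MPI-2 thread extension. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_t t;
    pthread_create(&t, NULL, monitor, NULL);

    /* ... main SPMD computation runs here, independent of the monitor ... */

    running = 0;
    pthread_join(t, NULL);
    MPI_Finalize();
    return 0;
}
```

The point of the sketch is only that balancing decisions can proceed asynchronously from the SPMD computation; how TeMPI actually performs status/resource monitoring and thread migration is detailed in the paper itself, not in this abstract.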