{"title":"大型密集线性系统的MPI并行解","authors":"Jian Zhang, C. Maple","doi":"10.1109/PCEE.2002.1115280","DOIUrl":null,"url":null,"abstract":"This paper first presents two implementations of parallel Gaussian elimination using MPI, one uses cyclic data mapping and pipelined point-to-point communication, the other one uses blocked data mapping and MPI collective communication. Then, theoretical performance analysis for the two implementations is given, and the impacts of different data distribution and communication methods are compared.","PeriodicalId":444003,"journal":{"name":"Proceedings. International Conference on Parallel Computing in Electrical Engineering","volume":"18 1-2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Parallel solutions of large dense linear systems using MPI\",\"authors\":\"Jian Zhang, C. Maple\",\"doi\":\"10.1109/PCEE.2002.1115280\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper first presents two implementations of parallel Gaussian elimination using MPI, one uses cyclic data mapping and pipelined point-to-point communication, the other one uses blocked data mapping and MPI collective communication. Then, theoretical performance analysis for the two implementations is given, and the impacts of different data distribution and communication methods are compared.\",\"PeriodicalId\":444003,\"journal\":{\"name\":\"Proceedings. International Conference on Parallel Computing in Electrical Engineering\",\"volume\":\"18 1-2\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2002-09-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. International Conference on Parallel Computing in Electrical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PCEE.2002.1115280\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. International Conference on Parallel Computing in Electrical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PCEE.2002.1115280","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Parallel solutions of large dense linear systems using MPI
This paper first presents two MPI implementations of parallel Gaussian elimination: one uses cyclic data mapping with pipelined point-to-point communication, and the other uses block data mapping with MPI collective communication. A theoretical performance analysis of the two implementations is then given, and the impacts of the different data distributions and communication methods are compared.
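To make the structure of such an implementation concrete, the following is a minimal sketch (not the authors' code) of row-distributed Gaussian elimination over MPI. It combines, purely for brevity, the cyclic row mapping of the paper's first variant with the collective broadcast of its second; the problem size N, the diagonally dominant test system, and the absence of pivoting are assumptions made only for this illustration.

/*
 * Sketch: forward elimination with a cyclic row mapping; the owner of
 * each pivot row shares it with MPI_Bcast. Not the paper's implementation.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 512  /* assumed problem size for the sketch */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Cyclic mapping: process p owns global rows p, p+size, p+2*size, ... */
    int nlocal = (N - rank + size - 1) / size;
    double *a = malloc((size_t)nlocal * N * sizeof *a);   /* local rows of A   */
    double *b = malloc((size_t)nlocal * sizeof *b);       /* local rhs entries */
    double *pivot = malloc((N + 1) * sizeof *pivot);      /* pivot row + rhs   */

    /* Fill local rows with an arbitrary diagonally dominant test system. */
    for (int i = 0; i < nlocal; ++i) {
        int gi = rank + i * size;                          /* global row index */
        for (int j = 0; j < N; ++j)
            a[i * N + j] = (gi == j) ? N : 1.0;
        b[i] = 1.0;
    }

    /* Forward elimination: the owner of row k broadcasts it, all processes
       then eliminate column k from the rows below k that they hold. */
    for (int k = 0; k < N; ++k) {
        int owner = k % size;
        if (rank == owner) {
            int li = (k - rank) / size;                    /* local index of row k */
            for (int j = 0; j < N; ++j)
                pivot[j] = a[li * N + j];
            pivot[N] = b[li];
        }
        MPI_Bcast(pivot, N + 1, MPI_DOUBLE, owner, MPI_COMM_WORLD);

        for (int i = 0; i < nlocal; ++i) {
            int gi = rank + i * size;
            if (gi <= k) continue;                         /* only rows below the pivot */
            double m = a[i * N + k] / pivot[k];
            for (int j = k; j < N; ++j)
                a[i * N + j] -= m * pivot[j];
            b[i] -= m * pivot[N];
        }
    }

    if (rank == 0)
        printf("forward elimination done on %d processes\n", size);

    free(a); free(b); free(pivot);
    MPI_Finalize();
    return 0;
}

In the paper's pipelined point-to-point variant, the MPI_Bcast above would be replaced by each process forwarding the pivot row to its successor in a ring, which overlaps communication of row k with elimination work on earlier rows.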