Improving Performance of Transposition Algorithm of 3-D Data Array for Parallelization Using Message Passing Interface

Masahiro Arai, F. Akagi, Saneyasu Yamaguchi, K. Yoshida

2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW), November 2018. DOI: 10.1109/CANDARW.2018.00094
Parallelization with the Message Passing Interface (MPI) is useful for improving the performance of the LLG micromagnetics simulator used to analyze magnetization behavior. However, the elements of the 3-D data arrays must be transposed to keep the data consistent. In this paper, we investigated two methods for improving the performance of these transpose processes. One splits the six transpose operations contained in a single triple for loop into six separate triple for loops. The other transposes the elements of the 3-D data arrays within each process before the data are integrated using MPI_Allgather(). We compared the performance improvements of the two methods on two supercomputers, Oakforest-PACS and Reedbush-U. The results show that the former method was effective only on Oakforest-PACS, whereas the latter was effective on both machines.
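The first method is plain loop fission: a triple for loop containing six transpose assignments is split into six triple for loops with one assignment each, and involves no MPI calls. To give a rough picture of the second method, the C sketch below transposes each rank's local slab of a 3-D array before the slabs are combined with MPI_Allgather(). It is a minimal illustration only, not the authors' code: the z-slab decomposition, the (x,y,z) to (z,y,x) transpose, and the array sizes are all assumptions made for this example.

```c
/* Minimal sketch (assumed layout, not from the paper): each rank holds a
 * z-slab of a NX x NY x NZ array, transposes it locally, and only then
 * gathers all slabs with MPI_Allgather(). */
#include <mpi.h>
#include <stdlib.h>

#define NX 64
#define NY 64
#define NZ 64   /* assumed divisible by the number of ranks */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int lz = NZ / nprocs;   /* local slab thickness along z */
    double *local  = malloc((size_t)NX * NY * lz * sizeof(double));
    double *sendbf = malloc((size_t)NX * NY * lz * sizeof(double));
    double *gathered = malloc((size_t)NX * NY * NZ * sizeof(double));

    /* fill the local slab, indexed local[(z*NY + y)*NX + x] */
    for (int z = 0; z < lz; z++)
        for (int y = 0; y < NY; y++)
            for (int x = 0; x < NX; x++)
                local[((size_t)z * NY + y) * NX + x] = rank + 0.001 * x;

    /* transpose (x,y,z) -> (z,y,x) inside the local slab BEFORE gathering */
    for (int z = 0; z < lz; z++)
        for (int y = 0; y < NY; y++)
            for (int x = 0; x < NX; x++)
                sendbf[((size_t)x * NY + y) * lz + z] =
                    local[((size_t)z * NY + y) * NX + x];

    /* every rank receives every rank's already-transposed slab,
     * concatenated in rank order */
    MPI_Allgather(sendbf, NX * NY * lz, MPI_DOUBLE,
                  gathered, NX * NY * lz, MPI_DOUBLE, MPI_COMM_WORLD);

    free(local); free(sendbf); free(gathered);
    MPI_Finalize();
    return 0;
}
```

Note that the gathered buffer holds the per-rank transposed slabs one after another in rank order; rearranging them into whatever final layout the simulator expects is omitted here. The paper's contribution is the comparison of such variants on Oakforest-PACS and Reedbush-U, not this particular data layout.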