{"title":"一种考虑处理器亲和性的I/O聚合器分配方案","authors":"Kwangho Cha, S. Maeng","doi":"10.1109/ICPPW.2011.23","DOIUrl":null,"url":null,"abstract":"As the number of processes in parallel applications increases, the importance of parallel I/O is also emphasized. Collective I/O is the specialized parallel I/O which provides the function of single-file-based parallel I/O. Collective I/O in popular message-passing interface (MPI) libraries follows a two-phase I/O scheme, in which the particular processes, namely I/O aggregators, perform important roles by engaging communications and I/O operations. Although there have been many previous works to improve the performance of collective I/O, it is hard to find a study about an I/O aggregator assignment considering multi-core architecture. Nowadays, many HPC systems use the multi-core system as a computational node. Therefore, it is important to understand the characteristics of multi-core architecture, such as processor affinity, in order to increase the performance of parallel applications. In this paper, we discovered that the communication costs in collective I/O were different according to the placement of I/O aggregators, where the computational nodes consisted of multi-core system and each node had multiple I/O aggregators. We also proposed a modified collective I/O scheme, in order to reduce the communication costs of collective I/O, by proper placement of I/O aggregators. The performance of our proposed scheme was examined on a Linux cluster system and the result demonstrated performance improvements in the range of 7.08% to 90.46% for read operations and 20.67% to 90.18% for write operations.","PeriodicalId":173271,"journal":{"name":"2011 40th International Conference on Parallel Processing Workshops","volume":"112 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"An Efficient I/O Aggregator Assignment Scheme for Collective I/O Considering Processor Affinity\",\"authors\":\"Kwangho Cha, S. Maeng\",\"doi\":\"10.1109/ICPPW.2011.23\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As the number of processes in parallel applications increases, the importance of parallel I/O is also emphasized. Collective I/O is the specialized parallel I/O which provides the function of single-file-based parallel I/O. Collective I/O in popular message-passing interface (MPI) libraries follows a two-phase I/O scheme, in which the particular processes, namely I/O aggregators, perform important roles by engaging communications and I/O operations. Although there have been many previous works to improve the performance of collective I/O, it is hard to find a study about an I/O aggregator assignment considering multi-core architecture. Nowadays, many HPC systems use the multi-core system as a computational node. Therefore, it is important to understand the characteristics of multi-core architecture, such as processor affinity, in order to increase the performance of parallel applications. In this paper, we discovered that the communication costs in collective I/O were different according to the placement of I/O aggregators, where the computational nodes consisted of multi-core system and each node had multiple I/O aggregators. We also proposed a modified collective I/O scheme, in order to reduce the communication costs of collective I/O, by proper placement of I/O aggregators. 
The performance of our proposed scheme was examined on a Linux cluster system and the result demonstrated performance improvements in the range of 7.08% to 90.46% for read operations and 20.67% to 90.18% for write operations.\",\"PeriodicalId\":173271,\"journal\":{\"name\":\"2011 40th International Conference on Parallel Processing Workshops\",\"volume\":\"112 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 40th International Conference on Parallel Processing Workshops\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPPW.2011.23\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 40th International Conference on Parallel Processing Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPPW.2011.23","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
As the number of processes in parallel applications grows, parallel I/O becomes increasingly important. Collective I/O is a specialized form of parallel I/O that lets many processes access a single shared file. In popular Message Passing Interface (MPI) libraries, collective I/O follows a two-phase I/O scheme in which designated processes, called I/O aggregators, play a central role by carrying out both the communication and the file I/O operations. Although much prior work has sought to improve collective I/O performance, studies of I/O aggregator assignment that take multi-core architecture into account are scarce. Many HPC systems today use multi-core systems as compute nodes, so understanding multi-core characteristics such as processor affinity is important for improving the performance of parallel applications. In this paper, we show that the communication cost of collective I/O varies with the placement of I/O aggregators when the compute nodes are multi-core systems and each node hosts multiple aggregators. We also propose a modified collective I/O scheme that reduces this communication cost through proper placement of I/O aggregators. We evaluated the proposed scheme on a Linux cluster, and the results show performance improvements ranging from 7.08% to 90.46% for read operations and from 20.67% to 90.18% for write operations.
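The paper's scheme changes how aggregators are placed inside the MPI library itself, and that internal modification is not reproduced here. As a minimal application-level sketch of the surrounding machinery, the C program below performs a collective write through MPI-IO and passes the ROMIO hints cb_nodes and cb_config_list, which influence how many I/O aggregators are used and how many may be placed on each node. The file name, block size, and hint values are illustrative assumptions, and these hints are honored only by ROMIO-based MPI-IO implementations.

```c
/* Minimal sketch (not the paper's modified library): each rank writes a
 * contiguous block of a shared file with collective I/O, while ROMIO hints
 * are used to influence how I/O aggregators are selected. */
#include <mpi.h>
#include <stdlib.h>

#define BLOCK_COUNT 1024  /* integers written by each rank (arbitrary choice) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Hints understood by ROMIO-based MPI-IO implementations:
     * "cb_nodes" sets the number of I/O aggregators, and "cb_config_list"
     * limits how many aggregators may run on each host ("*:1" requests at
     * most one per node). Other implementations may ignore these hints. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_nodes", "4");
    MPI_Info_set(info, "cb_config_list", "*:1");

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Each rank fills its block and writes it at a rank-dependent offset;
     * MPI_File_write_at_all triggers the two-phase collective I/O path in
     * which the aggregators exchange data and perform the file access. */
    int *buf = malloc(BLOCK_COUNT * sizeof(int));
    for (int i = 0; i < BLOCK_COUNT; i++)
        buf[i] = rank;

    MPI_Offset offset = (MPI_Offset)rank * BLOCK_COUNT * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, BLOCK_COUNT, MPI_INT,
                          MPI_STATUS_IGNORE);

    free(buf);
    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and run with several ranks per node, the MPI_File_write_at_all call exercises the two-phase path in which the selected aggregators gather data from the other ranks and issue the actual file writes; the proposed scheme goes further by choosing, with processor affinity in mind, which cores on each node host those aggregators.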