{"title":"针对基于infiniband的集群的高效、可扩展的全对全个性化交换","authors":"S. Sur, Hyun-Wook Jin, D. Panda","doi":"10.1109/ICPP.2004.1327932","DOIUrl":null,"url":null,"abstract":"The all-to-all personalized exchange is the most dense collective communication function offered by the MPI specification. The operation involves every process sending a different message to all other participating processes. This collective operation is essential for many parallel scientific applications. With increasing system and message sizes, it becomes challenging to offer a fast, scalable and efficient implementation of this operation. InfiniBand is an emerging modern interconnect. It offers very low latency, high bandwidth and one-sided operations like RDMA write. Its advanced features like RDMA write gather allow us to design and implement all-to-all algorithms much more efficiently than in the past. Our aim in This work is to design efficient and scalable implementations of traditional personalized exchange algorithms. We present two novel approaches towards designing all-to-all algorithms for short and long messages respectively. The hypercube RDMA write gather and direct eager schemes effectively leverage the RDMA and RDMA with write gather mechanisms offered by InfiniBand. Performance evaluation of our design and implementation reveals that it is able to reduce the all-to-all communication time by upto a factor of 3.07 for 32 byte messages on a 16 node InfiniBand cluster. Our analytical models suggest that the proposed designs perform 64% better on InfiniBand clusters with 1024 nodes for 4k message size.","PeriodicalId":106240,"journal":{"name":"International Conference on Parallel Processing, 2004. ICPP 2004.","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":"{\"title\":\"Efficient and scalable all-to-all personalized exchange for InfiniBand-based clusters\",\"authors\":\"S. Sur, Hyun-Wook Jin, D. Panda\",\"doi\":\"10.1109/ICPP.2004.1327932\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The all-to-all personalized exchange is the most dense collective communication function offered by the MPI specification. The operation involves every process sending a different message to all other participating processes. This collective operation is essential for many parallel scientific applications. With increasing system and message sizes, it becomes challenging to offer a fast, scalable and efficient implementation of this operation. InfiniBand is an emerging modern interconnect. It offers very low latency, high bandwidth and one-sided operations like RDMA write. Its advanced features like RDMA write gather allow us to design and implement all-to-all algorithms much more efficiently than in the past. Our aim in This work is to design efficient and scalable implementations of traditional personalized exchange algorithms. We present two novel approaches towards designing all-to-all algorithms for short and long messages respectively. The hypercube RDMA write gather and direct eager schemes effectively leverage the RDMA and RDMA with write gather mechanisms offered by InfiniBand. Performance evaluation of our design and implementation reveals that it is able to reduce the all-to-all communication time by upto a factor of 3.07 for 32 byte messages on a 16 node InfiniBand cluster. 
Our analytical models suggest that the proposed designs perform 64% better on InfiniBand clusters with 1024 nodes for 4k message size.\",\"PeriodicalId\":106240,\"journal\":{\"name\":\"International Conference on Parallel Processing, 2004. ICPP 2004.\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"24\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Parallel Processing, 2004. ICPP 2004.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPP.2004.1327932\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Parallel Processing, 2004. ICPP 2004.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPP.2004.1327932","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Efficient and scalable all-to-all personalized exchange for InfiniBand-based clusters
The all-to-all personalized exchange is the densest collective communication function offered by the MPI specification: every process sends a different message to every other participating process. This collective operation is essential for many parallel scientific applications. As system and message sizes grow, it becomes challenging to offer a fast, scalable and efficient implementation of this operation. InfiniBand is an emerging modern interconnect that offers very low latency, high bandwidth and one-sided operations such as RDMA write. Advanced features such as RDMA write gather allow us to design and implement all-to-all algorithms much more efficiently than in the past. Our aim in this work is to design efficient and scalable implementations of traditional personalized exchange algorithms. We present two novel approaches to designing all-to-all algorithms for short and long messages, respectively. The hypercube RDMA write gather and direct eager schemes effectively leverage the RDMA and RDMA write gather mechanisms offered by InfiniBand. Performance evaluation of our design and implementation reveals that it reduces the all-to-all communication time by up to a factor of 3.07 for 32-byte messages on a 16-node InfiniBand cluster. Our analytical models suggest that the proposed designs perform 64% better for 4 KB messages on InfiniBand clusters with 1024 nodes.
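For reference, the operation the abstract describes is exposed in MPI as MPI_Alltoall. The following minimal C/MPI program illustrates its semantics only (each rank contributes one distinct block per peer); the payload values are illustrative and have no connection to the paper's experiments:

```c
/* Minimal illustration of the all-to-all personalized exchange
 * (MPI_Alltoall): every rank sends a distinct block to every rank.
 * Build with: mpicc alltoall.c -o alltoall */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* sendbuf block i is the personalized message for rank i */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;   /* unique payload per destination */

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    /* recvbuf block j now holds the message rank j addressed to us */
    for (int j = 0; j < size; j++)
        printf("rank %d received %d from rank %d\n", rank, recvbuf[j], j);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```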
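The hypercube scheme mentioned above builds on the classic log2(p)-step store-and-forward exchange pattern. Below is a minimal sketch of that generic pattern in plain MPI, assuming a power-of-two rank count; the msg_t struct, function name, and payloads are hypothetical, and the paper's actual design drives each step with InfiniBand RDMA write gather, which this sketch does not model:

```c
/* Sketch of a hypercube store-and-forward all-to-all for short
 * messages (assumption: power-of-two number of ranks). In each of the
 * log2(p) steps, a rank exchanges with the partner differing in one
 * address bit the p/2 messages whose destinations lie in the partner's
 * half of the cube; after all steps, every held message targets us. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int src, dst, payload; } msg_t;  /* hypothetical */

static void hypercube_alltoall(msg_t *msgs, int p, int rank, MPI_Comm comm)
{
    msg_t *keepb = malloc(p * sizeof(msg_t));
    msg_t *sendb = malloc(p * sizeof(msg_t));
    msg_t *recvb = malloc(p * sizeof(msg_t));

    for (int bit = 1; bit < p; bit <<= 1) {
        int partner = rank ^ bit, nk = 0, ns = 0;

        /* split: messages whose destination bit differs must cross */
        for (int i = 0; i < p; i++) {
            if ((msgs[i].dst & bit) != (rank & bit)) sendb[ns++] = msgs[i];
            else                                     keepb[nk++] = msgs[i];
        }
        /* the partner sends back the same count (p/2 for 2^k ranks) */
        MPI_Sendrecv(sendb, ns * (int)sizeof(msg_t), MPI_BYTE, partner, 0,
                     recvb, ns * (int)sizeof(msg_t), MPI_BYTE, partner, 0,
                     comm, MPI_STATUS_IGNORE);
        memcpy(msgs, keepb, nk * sizeof(msg_t));
        memcpy(msgs + nk, recvb, ns * sizeof(msg_t));
    }
    free(keepb); free(sendb); free(recvb);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);   /* assumed to be a power of two */

    msg_t *msgs = malloc(p * sizeof(msg_t));
    for (int d = 0; d < p; d++)
        msgs[d] = (msg_t){ rank, d, rank * 1000 + d };

    hypercube_alltoall(msgs, p, rank, MPI_COMM_WORLD);

    for (int i = 0; i < p; i++)   /* every surviving message targets us */
        printf("rank %d got payload %d from rank %d\n",
               rank, msgs[i].payload, msgs[i].src);

    free(msgs);
    MPI_Finalize();
    return 0;
}
```

The trade-off this pattern makes is the one the abstract targets for short messages: each message is forwarded up to log2(p) times, but a rank issues only log2(p) sends of p/2 blocks each instead of p-1 separate small sends, amortizing per-message startup cost.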