Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation

A. Awan, J. Bédorf, Ching-Hsiang Chu, H. Subramoni, D. Panda
{"title":"使用TensorFlow和cuda感知MPI的可扩展分布式DNN训练:表征,设计和性能评估","authors":"A. Awan, J. Bédorf, Ching-Hsiang Chu, H. Subramoni, D. Panda","doi":"10.1109/CCGRID.2019.00064","DOIUrl":null,"url":null,"abstract":"The current wave of advances in Deep Learning (DL) have been triggered by the availability of large-scale datasets, efficient CPU and GPU hardware, and development of software frameworks like TensorFlow (TF). However, little exists in literature that addresses TensorFlow's distributed training capabilities. In this paper, we provide an in-depth performance characterization and design analysis for distributed TensorFlow. We present three key insights: 1) Horovod designs achieve better performance compared to the official gRPC-based approaches, 2) performance of Horovod design is heavily influenced by the time spent in gradient aggregation that uses the Allreduce primitive, and 3) performance of existing Horovod-MPI implementation is significantly worse compared to Horovod-NCCL. To address this limitation in Horovod-MPI, we propose a novel and efficient CUDA-Aware MPI Allreduce design that 1) exploits CUDA kernels to perform large reductions on the GPU, 2) uses a com-bination of bandwidth-optimal and latency-optimal algorithms, and 3) maintains a pointer cache to avoid CUDA-driver query overheads in the critical path. The proposed designs deliver 5×, 17×, and 29% better performance compared to NCCL2 for small, medium, and large messages. Our designs enable Horovod-MPI to beat state-of-the-art Horovod-NCCL2 by 3% and achieve 90% scaling efficiency for ResNet-50 training on 64 Pascal GPUs.","PeriodicalId":234571,"journal":{"name":"2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"36","resultStr":"{\"title\":\"Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation\",\"authors\":\"A. Awan, J. Bédorf, Ching-Hsiang Chu, H. Subramoni, D. Panda\",\"doi\":\"10.1109/CCGRID.2019.00064\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The current wave of advances in Deep Learning (DL) have been triggered by the availability of large-scale datasets, efficient CPU and GPU hardware, and development of software frameworks like TensorFlow (TF). However, little exists in literature that addresses TensorFlow's distributed training capabilities. In this paper, we provide an in-depth performance characterization and design analysis for distributed TensorFlow. We present three key insights: 1) Horovod designs achieve better performance compared to the official gRPC-based approaches, 2) performance of Horovod design is heavily influenced by the time spent in gradient aggregation that uses the Allreduce primitive, and 3) performance of existing Horovod-MPI implementation is significantly worse compared to Horovod-NCCL. To address this limitation in Horovod-MPI, we propose a novel and efficient CUDA-Aware MPI Allreduce design that 1) exploits CUDA kernels to perform large reductions on the GPU, 2) uses a com-bination of bandwidth-optimal and latency-optimal algorithms, and 3) maintains a pointer cache to avoid CUDA-driver query overheads in the critical path. The proposed designs deliver 5×, 17×, and 29% better performance compared to NCCL2 for small, medium, and large messages. 
Our designs enable Horovod-MPI to beat state-of-the-art Horovod-NCCL2 by 3% and achieve 90% scaling efficiency for ResNet-50 training on 64 Pascal GPUs.\",\"PeriodicalId\":234571,\"journal\":{\"name\":\"2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)\",\"volume\":\"120 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"36\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCGRID.2019.00064\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGRID.2019.00064","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 36

Abstract

The current wave of advances in Deep Learning (DL) has been triggered by the availability of large-scale datasets, efficient CPU and GPU hardware, and the development of software frameworks like TensorFlow (TF). However, little exists in the literature that addresses TensorFlow's distributed training capabilities. In this paper, we provide an in-depth performance characterization and design analysis for distributed TensorFlow. We present three key insights: 1) Horovod designs achieve better performance than the official gRPC-based approaches, 2) the performance of the Horovod design is heavily influenced by the time spent in gradient aggregation, which uses the Allreduce primitive, and 3) the performance of the existing Horovod-MPI implementation is significantly worse than that of Horovod-NCCL. To address this limitation in Horovod-MPI, we propose a novel and efficient CUDA-Aware MPI Allreduce design that 1) exploits CUDA kernels to perform large reductions on the GPU, 2) uses a combination of bandwidth-optimal and latency-optimal algorithms, and 3) maintains a pointer cache to avoid CUDA-driver query overheads in the critical path. The proposed designs deliver 5×, 17×, and 29% better performance than NCCL2 for small, medium, and large messages, respectively. Our designs enable Horovod-MPI to beat the state-of-the-art Horovod-NCCL2 by 3% and achieve 90% scaling efficiency for ResNet-50 training on 64 Pascal GPUs.
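The characterization in the paper targets Horovod's data-parallel training path, where each worker computes gradients locally and averages them with an Allreduce after every step. The sketch below shows the general shape of such a run, assuming a working Horovod installation with TensorFlow/Keras support; the model, data, and hyperparameters are placeholders, not the authors' benchmark configuration.

```python
# Minimal Horovod + TensorFlow/Keras data-parallel training sketch (illustrative only).
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker to one local GPU so ranks do not contend for the same device.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder model; the paper's evaluation uses ResNet-50.
model = tf.keras.applications.ResNet50(weights=None, classes=1000)

# Scale the learning rate by the worker count and wrap the optimizer so that
# gradients are averaged with Allreduce (NCCL or MPI, depending on the Horovod build).
opt = hvd.DistributedOptimizer(
    tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size(), momentum=0.9))

model.compile(loss='categorical_crossentropy', optimizer=opt)

# Broadcast initial weights from rank 0 so all workers start from the same state.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

# Synthetic data, for illustration only.
x = tf.random.uniform((64, 224, 224, 3))
y = tf.one_hot(tf.random.uniform((64,), maxval=1000, dtype=tf.int32), depth=1000)

model.fit(x, y, batch_size=32, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Such a script is typically launched with one process per GPU, e.g. `horovodrun -np 64 python train.py` or an equivalent `mpirun` command.

The proposed CUDA-Aware Allreduce design itself lives inside the MPI library (GPU-side reductions, a mix of bandwidth- and latency-optimal algorithms, and a pointer cache), so from the application's point of view it is exercised simply by passing device buffers to MPI_Allreduce. The snippet below is a hypothetical standalone illustration of that call pattern using mpi4py and CuPy; it assumes mpi4py (3.1 or newer) built against a CUDA-aware MPI, and is not the authors' in-library implementation.

```python
# Hypothetical CUDA-aware Allreduce over a GPU-resident gradient buffer (illustrative only).
import cupy as cp
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Select a local GPU for this rank.
cp.cuda.Device(rank % cp.cuda.runtime.getDeviceCount()).use()

# A device buffer standing in for a flattened gradient tensor.
grad = cp.full(8 * 1024 * 1024, float(rank), dtype=cp.float32)

# With a CUDA-aware MPI, the device buffer is passed to Allreduce directly;
# no explicit staging copy through host memory is needed.
comm.Allreduce(MPI.IN_PLACE, grad, op=MPI.SUM)

# Average across workers, as Horovod does after the reduction.
grad /= size
```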