A Distributed Data-Parallel PyTorch Implementation of the Distributed Shampoo Optimizer for Training Neural Networks At-Scale

Hao-Jun Michael Shi, Tsung-Hsien Lee, Shintaro Iwasaki, Jose Gallego-Posada, Zhijing Li, Kaushik Rangadurai, Dheevatsa Mudigere, Michael Rabbat

arXiv:2309.06497 [cs.MS], 12 September 2023

Shampoo is an online and stochastic optimization algorithm belonging to the
AdaGrad family of methods for training neural networks. It constructs a
block-diagonal preconditioner where each block consists of a coarse Kronecker
product approximation to full-matrix AdaGrad for each parameter of the neural
network. In this work, we provide a complete description of the algorithm as
well as the performance optimizations that our implementation leverages to
train deep networks at scale in PyTorch. Our implementation enables fast
multi-GPU distributed data-parallel training by distributing the memory and
computation associated with blocks of each parameter via PyTorch's DTensor data
structure and performing an AllGather primitive on the computed search
directions at each iteration. This major performance enhancement limits the increase in per-step wall-clock time to at most 10% relative to standard diagonal-scaling-based adaptive gradient methods. We
validate our implementation by performing an ablation study of ResNet50 training on ImageNet, demonstrating Shampoo's superiority over standard training recipes with minimal hyperparameter tuning.
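
To make the Kronecker-factored preconditioner concrete, below is a minimal, single-process sketch of the classic Shampoo update for one matrix-shaped parameter. It is an illustration only, not the paper's optimizer API: `ToyShampoo` and `matrix_inverse_root` are invented names, and the blocking of parameters and the performance optimizations described in the paper are omitted.

```python
import torch


def matrix_inverse_root(mat: torch.Tensor, root: int, eps: float = 1e-12) -> torch.Tensor:
    """Return mat^(-1/root) for a symmetric PSD matrix via eigendecomposition."""
    evals, evecs = torch.linalg.eigh(mat)
    inv_root_evals = evals.clamp_min(eps) ** (-1.0 / root)
    return evecs @ torch.diag(inv_root_evals) @ evecs.T


class ToyShampoo:
    """Single-parameter sketch of the Shampoo update for one 2-D weight.

    For a gradient G of shape (m, n), accumulate the Kronecker factors
    L += G G^T and R += G^T G, and use L^(-1/4) G R^(-1/4) as the search
    direction (a coarse Kronecker-product approximation to full-matrix AdaGrad).
    """

    def __init__(self, param: torch.Tensor, lr: float = 1e-3, eps: float = 1e-12):
        assert param.dim() == 2, "sketch covers matrix-shaped parameters only"
        m, n = param.shape
        self.param, self.lr = param, lr
        self.L = eps * torch.eye(m, device=param.device, dtype=param.dtype)  # left factor, m x m
        self.R = eps * torch.eye(n, device=param.device, dtype=param.dtype)  # right factor, n x n

    @torch.no_grad()
    def step(self) -> None:
        G = self.param.grad
        self.L += G @ G.T
        self.R += G.T @ G
        direction = matrix_inverse_root(self.L, 4) @ G @ matrix_inverse_root(self.R, 4)
        self.param -= self.lr * direction
```

In the paper's optimizer, each parameter is additionally partitioned into blocks and each block carries its own pair of factors, which is what makes the overall preconditioner block-diagonal and the optimizer state natural to distribute.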
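The data-parallel strategy described in the abstract, where each worker holds only some blocks' state, computes only those blocks' search directions, and an AllGather makes every direction available on every rank, can be sketched as follows. This is a hand-rolled illustration of the communication pattern, not the paper's implementation: it uses plain `torch.distributed.all_gather` rather than DTensor, the helper names (`distributed_block_step`, `precondition`) are hypothetical, and it assumes an initialized process group, uniformly shaped blocks, and a block count divisible by the world size.

```python
import torch
import torch.distributed as dist


@torch.no_grad()
def distributed_block_step(blocks, grads, precondition, lr=1e-3):
    """Each rank preconditions only the blocks it owns; an AllGather then
    shares every rank's search directions so all ranks apply the same update.

    Assumes dist.init_process_group() has been called, all blocks share one
    shape, and len(blocks) is a multiple of the world size.
    """
    rank, world = dist.get_rank(), dist.get_world_size()
    assert len(blocks) % world == 0, "sketch assumes an evenly divisible block count"

    # Round-robin ownership: rank r owns blocks r, r + world, r + 2 * world, ...
    owned = range(rank, len(blocks), world)

    # Compute and flatten search directions for the owned blocks only.
    local = torch.cat([precondition(grads[i], i).flatten() for i in owned])

    # AllGather: every rank receives every other rank's flattened directions.
    gathered = [torch.empty_like(local) for _ in range(world)]
    dist.all_gather(gathered, local)

    # Route each gathered direction back to its block and apply the update.
    block_numel = blocks[0].numel()
    for src_rank, buf in enumerate(gathered):
        for j, idx in enumerate(range(src_rank, len(blocks), world)):
            direction = buf[j * block_numel:(j + 1) * block_numel].view_as(blocks[idx])
            blocks[idx] -= lr * direction
```

The design trades one collective per iteration for splitting the preconditioner memory and inverse-root computation across workers, mirroring the abstract's description of distributing each parameter's blocks and all-gathering the computed search directions.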