Activations and Gradients Compression for Model-Parallel Training

Pub Date: 2024-03-25 | DOI: 10.1134/S1064562423701314
M. I. Rudakov, A. N. Beznosikov, Ya. A. Kholodov, A. V. Gasnikov
{"title":"用于模型并行训练的激活和梯度压缩","authors":"M. I. Rudakov,&nbsp;A. N. Beznosikov,&nbsp;Ya. A. Kholodov,&nbsp;A. V. Gasnikov","doi":"10.1134/S1064562423701314","DOIUrl":null,"url":null,"abstract":"<p>Large neural networks require enormous computational clusters of machines. Model-parallel training, when the model architecture is partitioned sequentially between workers, is a popular approach for training modern models. Information compression can be applied to decrease workers’ communication time, as it is often a bottleneck in such systems. This work explores how simultaneous compression of activations and gradients in model-parallel distributed training setup affects convergence. We analyze compression methods such as quantization and TopK compression, and also experiment with error compensation techniques. Moreover, we employ TopK with AQ-SGD per-batch error feedback approach. We conduct experiments on image classification and language model fine-tuning tasks. Our findings demonstrate that gradients require milder compression rates than activations. We observe that <span>\\(K = 10\\% \\)</span> is the lowest TopK compression level, which does not harm model convergence severely. Experiments also show that models trained with TopK perform well only when compression is also applied during inference. We find that error feedback techniques do not improve model-parallel training compared to plain compression, but allow model inference without compression with almost no quality drop. Finally, when applied with the AQ-SGD approach, TopK stronger than with <span>\\(K = 30\\% \\)</span> worsens model performance significantly.</p>","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Activations and Gradients Compression for Model-Parallel Training\",\"authors\":\"M. I. Rudakov,&nbsp;A. N. Beznosikov,&nbsp;Ya. A. Kholodov,&nbsp;A. V. Gasnikov\",\"doi\":\"10.1134/S1064562423701314\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Large neural networks require enormous computational clusters of machines. Model-parallel training, when the model architecture is partitioned sequentially between workers, is a popular approach for training modern models. Information compression can be applied to decrease workers’ communication time, as it is often a bottleneck in such systems. This work explores how simultaneous compression of activations and gradients in model-parallel distributed training setup affects convergence. We analyze compression methods such as quantization and TopK compression, and also experiment with error compensation techniques. Moreover, we employ TopK with AQ-SGD per-batch error feedback approach. We conduct experiments on image classification and language model fine-tuning tasks. Our findings demonstrate that gradients require milder compression rates than activations. We observe that <span>\\\\(K = 10\\\\% \\\\)</span> is the lowest TopK compression level, which does not harm model convergence severely. Experiments also show that models trained with TopK perform well only when compression is also applied during inference. We find that error feedback techniques do not improve model-parallel training compared to plain compression, but allow model inference without compression with almost no quality drop. 
Finally, when applied with the AQ-SGD approach, TopK stronger than with <span>\\\\(K = 30\\\\% \\\\)</span> worsens model performance significantly.</p>\",\"PeriodicalId\":0,\"journal\":{\"name\":\"\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0,\"publicationDate\":\"2024-03-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://link.springer.com/article/10.1134/S1064562423701314\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"100","ListUrlMain":"https://link.springer.com/article/10.1134/S1064562423701314","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract



Large neural networks require enormous computational clusters of machines. Model-parallel training, in which the model architecture is partitioned sequentially between workers, is a popular approach for training modern models. Information compression can be applied to decrease workers’ communication time, as it is often a bottleneck in such systems. This work explores how simultaneous compression of activations and gradients in a model-parallel distributed training setup affects convergence. We analyze compression methods such as quantization and TopK compression, and also experiment with error compensation techniques. Moreover, we employ TopK with the AQ-SGD per-batch error feedback approach. We conduct experiments on image classification and language model fine-tuning tasks. Our findings demonstrate that gradients require milder compression rates than activations. We observe that \(K = 10\% \) is the lowest TopK compression level that does not severely harm model convergence. Experiments also show that models trained with TopK perform well only when compression is also applied during inference. We find that error feedback techniques do not improve model-parallel training compared to plain compression, but they allow model inference without compression with almost no quality drop. Finally, when combined with the AQ-SGD approach, TopK compression stronger than \(K = 30\% \) significantly worsens model performance.
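To make the TopK-with-error-feedback idea concrete, below is a minimal sketch in PyTorch of how such a compressor might be applied to an activation or gradient tensor before it is communicated between workers. The names `topk_compress` and `ErrorFeedback` are illustrative assumptions, not code from the paper, and the AQ-SGD per-batch scheme used in the paper is more elaborate than this simple residual accumulation.

```python
# Illustrative sketch only: TopK compression with a simple error-feedback
# (residual accumulation) buffer. Not the authors' implementation; the AQ-SGD
# per-batch error feedback used in the paper is more involved.
import torch


def topk_compress(tensor: torch.Tensor, k_ratio: float = 0.1) -> torch.Tensor:
    """Keep the top k_ratio fraction of entries by magnitude, zero the rest."""
    flat = tensor.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(tensor)


class ErrorFeedback:
    """Accumulates the compression residual and re-injects it on the next call."""

    def __init__(self) -> None:
        self.residual = None

    def compress(self, tensor: torch.Tensor, k_ratio: float = 0.1) -> torch.Tensor:
        if self.residual is not None:
            tensor = tensor + self.residual
        compressed = topk_compress(tensor, k_ratio)
        self.residual = tensor - compressed  # error carried over to the next batch
        return compressed


# Example: compress an activation tensor before sending it to the next worker;
# K = 10% is the mildest TopK level the paper reports as safe for convergence.
ef = ErrorFeedback()
activation = torch.randn(4, 256)
sent = ef.compress(activation, k_ratio=0.10)
```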
