Exploring learning rate scaling rules for distributed ML training on transient resources

Joel André, F. Strati, Ana Klimovic
DOI: 10.1145/3565010.3569067
Venue: Proceedings of the 3rd International Workshop on Distributed Machine Learning
Published: 2022-12-06
Citations: 1

Abstract

Training Machine Learning (ML) models to convergence is a long-running and expensive procedure, as it requires large clusters of high-end accelerators such as GPUs and TPUs. Many ML frameworks have proposed elastic distributed training, which enables using transient resources such as spot VMs in the cloud, reducing the overall cost. However, the availability of transient resources varies over time, creating an inherently dynamic environment that requires special handling of training hyperparameters. Techniques such as gradient accumulation enable using the same hyperparameters upon resource preemptions, however sequentially accumulating gradients stalls synchronous distributed training. On the other hand, scaling the batch size according to the available resources requires tuning of other hyperparameters, such as the learning rate. In this work, we study how learning rate scaling rules perform under dynamic environments when the batch size changes frequently and drastically, as we observed in real cloud clusters. We build a PyTorch-based system to evaluate Stochastic Gradient Descent on Image Recognition and Object Detection tasks under various learning rate scaling rules and resource availability traces. We observe minor or no degradation in model convergence when choosing the correct learning rate scaling rule. Identifying the appropriate scaling rule for a given model is non-trivial. Automating this decision remains an open question.
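The scaling rules studied here relate the learning rate to the global batch size, which in an elastic setting changes whenever workers are preempted or join. As a minimal sketch (the function name, signature, and rule names below are illustrative, not taken from the paper's code), the two rules most commonly discussed in the literature — linear scaling and square-root scaling — can be written as:

```python
import math

def scaled_lr(base_lr, base_batch, new_batch, rule="linear"):
    """Adjust the learning rate when the effective global batch size
    changes, e.g. after a spot-VM preemption removes workers.

    Illustrative rules:
      - "linear": lr proportional to batch size
      - "sqrt":   lr proportional to sqrt(batch size)
    """
    ratio = new_batch / base_batch
    if rule == "linear":
        return base_lr * ratio
    if rule == "sqrt":
        return base_lr * math.sqrt(ratio)
    raise ValueError(f"unknown rule: {rule!r}")

# Example: 8 workers with per-worker batch 32 preempted down to 2 workers.
print(scaled_lr(0.1, base_batch=8 * 32, new_batch=2 * 32, rule="linear"))  # 0.025
print(scaled_lr(0.1, base_batch=8 * 32, new_batch=2 * 32, rule="sqrt"))   # 0.05
```

Which rule preserves convergence depends on the model and task — the paper's point is precisely that the appropriate choice is non-trivial and not yet automated.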