{"title":"Measuring the Impact of Gradient Accumulation on Cloud-based Distributed Training","authors":"Zimeng Huang, Bo Jiang, Tian Guo, Yunzhuo Liu","doi":"10.1109/CCGrid57682.2023.00040","DOIUrl":null,"url":null,"abstract":"Gradient accumulation (GA) is a commonly adopted technique for addressing the GPU memory shortage problem in model training. It reduces memory consumption at the cost of increased computation time. Although widely used, its benefits to model training have not been systematically studied. Our work evaluates and summarizes the benefits of GA, especially in cloud-based distributed training scenarios, where training cost is determined by both execution time and resource consumption. We focus on how GA can be utilized to balance execution time and resource consumption to achieve the lowest bills. Through empirical evaluations on AliCloud platforms, we observe that the total training cost can be reduced by 31.2% on average with a 17.3% increase in training time, when GA is introduced in the large-model and small-bandwidth scenarios with data-parallel training strategies. Besides, taking micro-batch size into optimization can further decrease training time and cost by 21.2% and 24.8% on average, respectively, for hybrid-parallel strategies in large-model and GPU training scenarios.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"06 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGrid57682.2023.00040","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Gradient accumulation (GA) is a commonly adopted technique for addressing the GPU memory shortage problem in model training. It reduces memory consumption at the cost of increased computation time. Although widely used, its benefits to model training have not been systematically studied. Our work evaluates and summarizes the benefits of GA, especially in cloud-based distributed training scenarios, where training cost is determined by both execution time and resource consumption. We focus on how GA can be used to balance execution time against resource consumption so as to minimize the training bill. Through empirical evaluations on AliCloud platforms, we observe that introducing GA in large-model, small-bandwidth scenarios with data-parallel training strategies reduces the total training cost by 31.2% on average, at the price of a 17.3% increase in training time. Moreover, including the micro-batch size in the optimization further decreases training time and cost by 21.2% and 24.8% on average, respectively, for hybrid-parallel strategies in large-model and GPU training scenarios.
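For readers unfamiliar with the technique, the sketch below illustrates the general idea of gradient accumulation in PyTorch: a large effective batch is split into several micro-batches whose gradients are summed before a single optimizer step, trading extra forward/backward passes (more time) for lower per-step memory. The model, synthetic data, and `accum_steps` value are illustrative assumptions, not the paper's experimental configuration.

```python
# Minimal sketch of gradient accumulation (illustrative only; not the
# paper's setup). A large effective batch of 32 is processed as 4
# micro-batches of 8, accumulating gradients before one optimizer step.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                      # stand-in model (hypothetical)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Synthetic micro-batches standing in for a real data loader.
loader = [(torch.randn(8, 128), torch.randint(0, 10, (8,))) for _ in range(8)]

accum_steps = 4                                  # micro-batches per optimizer step
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    # Scale the loss so the accumulated gradient matches one large-batch step.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()                              # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        optimizer.step()                         # one update per accumulated batch
        optimizer.zero_grad()
```

The trade-off the paper measures follows directly from this loop: each optimizer step now requires `accum_steps` forward/backward passes (longer execution time), but only one micro-batch's activations need to be held in memory at a time, which in turn affects which cloud instance types and parallelization strategies are feasible.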