{"title":"Empirical Analysis and Modeling of Compute Times of CNN Operations on AWS Cloud","authors":"Ubaid Ullah Hafeez, Anshul Gandhi","doi":"10.1109/IISWC50251.2020.00026","DOIUrl":null,"url":null,"abstract":"Given the widespread use of Convolutional Neural Networks (CNNs) in image classification applications, cloud providers now routinely offer several GPU-equipped instances with varying price points and hardware specifications. From a practitioner's perspective, given an arbitrary CNN, it is not obvious which GPU instance should be employed to minimize the model training time and/or rental cost. This paper presents Ceer, a model-driven approach to determine the optimal GPU instance(s) for any given CNN. Based on an operation-level empirical analysis of various CNNs, we develop regression models for heavy GPU operations (where input size is a key feature) and employ the sample median estimator for light GPU and CPU operations. To estimate the communication overhead between CPU and GPU(s), especially in the case of multi-GPU training, we develop a model that relates this communication overhead to the number of model parameters in the CNN. Evaluation results on AWS Cloud show that Ceer can accurately predict training time and cost (less than 5% average prediction error) across CNNs, enabling 36% −44% cost savings over simpler strategies that employ the cheapest or the latest generation GPU instances.","PeriodicalId":365983,"journal":{"name":"2020 IEEE International Symposium on Workload Characterization (IISWC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Workload Characterization (IISWC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISWC50251.2020.00026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Given the widespread use of Convolutional Neural Networks (CNNs) in image classification applications, cloud providers now routinely offer several GPU-equipped instances with varying price points and hardware specifications. From a practitioner's perspective, given an arbitrary CNN, it is not obvious which GPU instance should be employed to minimize the model training time and/or rental cost. This paper presents Ceer, a model-driven approach to determine the optimal GPU instance(s) for any given CNN. Based on an operation-level empirical analysis of various CNNs, we develop regression models for heavy GPU operations (where input size is a key feature) and employ the sample median estimator for light GPU and CPU operations. To estimate the communication overhead between CPU and GPU(s), especially in the case of multi-GPU training, we develop a model that relates this communication overhead to the number of model parameters in the CNN. Evaluation results on AWS Cloud show that Ceer can accurately predict training time and cost (less than 5% average prediction error) across CNNs, enabling 36%–44% cost savings over simpler strategies that employ the cheapest or the latest generation GPU instances.
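To make the estimation strategy in the abstract concrete, below is a minimal Python sketch of the three per-operation estimators it describes: a regression on input size for heavy GPU operations, the sample median for light GPU/CPU operations, and a parameter-count-based model for CPU-GPU communication overhead. All data values, the function names (`predict_heavy`, `predict_step_time`), and the linear functional forms are illustrative assumptions, not the paper's actual features or fitted models.

```python
import numpy as np

# Hypothetical profiled measurements: (input_size, time_ms) pairs for a
# heavy GPU op (e.g., conv2d), and repeated timings for a light op.
heavy_samples = np.array([
    [1.0e6, 2.1], [2.0e6, 4.0], [4.0e6, 8.3], [8.0e6, 16.1],
])
light_samples = np.array([0.11, 0.10, 0.12, 0.11, 0.13])  # ms

# Heavy GPU ops: least-squares regression of compute time on input size
# (a linear form is assumed here for illustration).
sizes, times = heavy_samples[:, 0], heavy_samples[:, 1]
slope, intercept = np.polyfit(sizes, times, deg=1)

def predict_heavy(input_size):
    """Predicted time (ms) for a heavy op at a given input size."""
    return slope * input_size + intercept

# Light GPU/CPU ops: the sample median serves as a robust point estimate.
light_estimate = float(np.median(light_samples))

# Communication overhead: modeled as a function of the CNN's parameter
# count; hypothetical (param_count, overhead_ms) measurements below.
comm_samples = np.array([[5.0e6, 12.0], [25.0e6, 55.0], [60.0e6, 130.0]])
c_slope, c_intercept = np.polyfit(comm_samples[:, 0], comm_samples[:, 1], deg=1)

def predict_step_time(heavy_sizes, n_light_ops, n_params):
    """Rough per-iteration estimate: heavy ops + light ops + comm cost."""
    heavy = sum(predict_heavy(s) for s in heavy_sizes)
    light = n_light_ops * light_estimate
    comm = c_slope * n_params + c_intercept
    return heavy + light + comm

if __name__ == "__main__":
    # Example: two heavy ops, 30 light ops, a 25M-parameter model.
    print(predict_step_time([2.0e6, 4.0e6], n_light_ops=30, n_params=25.0e6))
```

Given such a per-iteration estimate for each candidate GPU instance, total training time follows by multiplying by the iteration count, and rental cost by multiplying time by the instance's hourly price, which is the comparison Ceer's instance selection rests on.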