Deep Learning Cookbook: Recipes for your AI Infrastructure and Applications
S. Serebryakov, D. Milojicic, N. Vassilieva, S. Fleischman, R. Clark
2019 IEEE International Conference on Rebooting Computing (ICRC), November 2019. DOI: 10.1109/ICRC.2019.8914704
Abstract
Deep Learning (DL) has gained wide adoption, and different DL models are being deployed for an expanding number of applications, both for inference at the edge and for training in datacenters. Applications include image recognition, video analytics, pattern recognition in network traffic, and many others. Different applications rely on different neural network models, and it has proven difficult to predict the resource requirements of a given model and application. This leads to nonsystematic and suboptimal selection of computational resources for DL applications, resulting in overpaying for underutilized infrastructure or, even worse, deploying models on underpowered hardware and missing service level objectives. In this paper we present the DL Cookbook, a toolset that a) helps benchmark models on different hardware, b) guides the use of DL, and c) provides reference designs. Automated benchmarking collects performance data for different DL workloads (training and inference with different models) on various hardware and software configurations. A web-based tool guides the choice of an optimal hardware and software configuration by analyzing the collected performance data and applying performance models. Finally, it offers reference hardware/software stacks for particular classes of deep learning workloads. In this way the DL Cookbook helps both customers and hardware vendors match optimal DL models to the available hardware and, conversely, in the case of an acquisition, specify the hardware required for the models in question. The DL Cookbook also helps with reproducibility of results.
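To make the configuration-guidance step concrete, the following is a minimal sketch (not the DL Cookbook's actual API) of how collected benchmark records could be analyzed to recommend a hardware/software configuration that meets a throughput service-level objective at the lowest cost. The record fields, function name, and all numbers are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BenchmarkRecord:
    """One automated-benchmarking result: a model measured on one HW/SW configuration (hypothetical schema)."""
    model: str              # e.g. "resnet50"
    hardware: str           # e.g. "8x V100"
    software: str           # e.g. "TensorFlow 1.14 / CUDA 10"
    throughput: float       # measured samples per second (training or inference)
    cost_per_hour: float    # infrastructure cost in USD per hour (assumed available)

def recommend_config(records: List[BenchmarkRecord],
                     model: str,
                     min_throughput: float) -> Optional[BenchmarkRecord]:
    """Return the cheapest configuration whose measured throughput for `model`
    meets the service-level objective, or None if no configuration qualifies."""
    candidates = [r for r in records
                  if r.model == model and r.throughput >= min_throughput]
    return min(candidates, key=lambda r: r.cost_per_hour) if candidates else None

# Illustrative use: pick hardware for ResNet-50 training with an SLO of 1500 images/s.
records = [
    BenchmarkRecord("resnet50", "4x V100", "TF 1.14 / CUDA 10", 1200.0, 12.0),
    BenchmarkRecord("resnet50", "8x V100", "TF 1.14 / CUDA 10", 2300.0, 24.0),
]
best = recommend_config(records, "resnet50", min_throughput=1500.0)
print(best.hardware if best else "no configuration meets the SLO")

In the paper's setting, such records would come from the automated benchmarking runs, and the selection logic would additionally draw on performance models rather than on measured points alone.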