{"title":"XSP: Across-Stack Profiling and Analysis of Machine Learning Models on GPUs","authors":"Cheng Li, Abdul Dakkak, Jinjun Xiong, Wei Wei, Lingjie Xu, Wen-Mei Hwu","doi":"10.1109/IPDPS47924.2020.00042","DOIUrl":null,"url":null,"abstract":"There has been a rapid proliferation of machine learning/deep learning (ML) models and wide adoption of them in many application domains. This has made profiling and characterization of ML model performance an increasingly pressing task for both hardware designers and system providers, as they would like to offer the best possible system to serve ML models with the target latency, throughput, cost, and energy requirements while maximizing resource utilization. Such an endeavor is challenging as the characteristics of an ML model depend on the interplay between the model, framework, system libraries, and the hardware (or the HW/SW stack). Existing profiling tools are disjoint, however, and only focus on profiling within a particular level of the stack, which limits the thoroughness and usefulness of the profiling results.This paper proposes XSP — an across-stack profiling design that gives a holistic and hierarchical view of ML model execution. XSP leverages distributed tracing to aggregate and correlate profile data from different sources. XSP introduces a leveled and iterative measurement approach that accurately captures the latencies at all levels of the HW/SW stack in spite of the profiling overhead. We couple the profiling design with an automated analysis pipeline to systematically analyze 65 state-of-the-art ML models. 
We demonstrate that XSP provides insights which would be difficult to discern otherwise.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"37 12","pages":"326-327"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPS47924.2020.00042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14
Abstract
There has been a rapid proliferation of machine learning/deep learning (ML) models and their wide adoption in many application domains. This has made profiling and characterization of ML model performance an increasingly pressing task for both hardware designers and system providers, as they would like to offer the best possible system to serve ML models with the target latency, throughput, cost, and energy requirements while maximizing resource utilization. Such an endeavor is challenging, as the characteristics of an ML model depend on the interplay between the model, framework, system libraries, and hardware (the HW/SW stack). Existing profiling tools are disjoint, however, and focus only on profiling within a particular level of the stack, which limits the thoroughness and usefulness of the profiling results. This paper proposes XSP — an across-stack profiling design that gives a holistic and hierarchical view of ML model execution. XSP leverages distributed tracing to aggregate and correlate profile data from different sources. XSP introduces a leveled and iterative measurement approach that accurately captures the latencies at all levels of the HW/SW stack in spite of the profiling overhead. We couple the profiling design with an automated analysis pipeline to systematically analyze 65 state-of-the-art ML models. We demonstrate that XSP provides insights which would be difficult to discern otherwise.
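The core correlation idea described in the abstract — aggregating timestamped profile events from different stack levels (model, layer, GPU kernel) into one hierarchy, in the style of distributed-tracing spans — can be illustrated with a minimal sketch. This is not XSP's actual implementation; the `Span` class and time-containment nesting rule are assumptions chosen to show the concept:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """A timestamped profile event from one level of the HW/SW stack."""
    name: str
    level: int            # e.g. 1 = model, 2 = framework layer, 3 = GPU kernel
    start: float          # timestamps assumed to share one clock after alignment
    end: float
    children: list = field(default_factory=list)

def correlate(spans):
    """Nest each span under the closest enclosing span whose time
    interval contains it, yielding a hierarchical trace tree."""
    spans = sorted(spans, key=lambda s: (s.start, -s.end))
    roots, stack = [], []
    for s in spans:
        # Pop ancestors that do not contain this span's interval.
        while stack and not (stack[-1].start <= s.start and s.end <= stack[-1].end):
            stack.pop()
        (stack[-1].children if stack else roots).append(s)
        stack.append(s)
    return roots

# Hypothetical profile data from three levels of the stack.
events = [
    Span("model_inference", 1, 0.0, 10.0),
    Span("conv1",           2, 0.0, 6.0),
    Span("sgemm_kernel",    3, 1.0, 5.0),
    Span("fc1",             2, 6.0, 10.0),
]
tree = correlate(events)
# tree[0] is the model span; its children are the two layer spans,
# and the GPU kernel span nests under conv1.
```

In XSP the spans come from independent profilers at each level, so the real system must also correct for the profiling overhead each level introduces (the "leveled and iterative measurement" the abstract refers to); this sketch only shows the aggregation-and-correlation step.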