{"title":"Hardware-Assisted Virtualization of Neural Processing Units for Cloud Platforms","authors":"Yuqi Xue, Yiqi Liu, Lifeng Nai, Jian Huang","doi":"arxiv-2408.04104","DOIUrl":null,"url":null,"abstract":"Cloud platforms today have been deploying hardware accelerators like neural\nprocessing units (NPUs) for powering machine learning (ML) inference services.\nTo maximize the resource utilization while ensuring reasonable quality of\nservice, a natural approach is to virtualize NPUs for efficient resource\nsharing for multi-tenant ML services. However, virtualizing NPUs for modern\ncloud platforms is not easy. This is not only due to the lack of system\nabstraction support for NPU hardware, but also due to the lack of architectural\nand ISA support for enabling fine-grained dynamic operator scheduling for\nvirtualized NPUs. We present TCloud, a holistic NPU virtualization framework. We investigate\nvirtualization techniques for NPUs across the entire software and hardware\nstack. TCloud consists of (1) a flexible NPU abstraction called vNPU, which\nenables fine-grained virtualization of the heterogeneous compute units in a\nphysical NPU (pNPU); (2) a vNPU resource allocator that enables pay-as-you-go\ncomputing model and flexible vNPU-to-pNPU mappings for improved resource\nutilization and cost-effectiveness; (3) an ISA extension of modern NPU\narchitecture for facilitating fine-grained tensor operator scheduling for\nmultiple vNPUs. We implement TCloud based on a production-level NPU simulator.\nOur experiments show that TCloud improves the throughput of ML inference\nservices by up to 1.4$\\times$ and reduces the tail latency by up to\n4.6$\\times$, while improving the NPU utilization by 1.2$\\times$ on average,\ncompared to state-of-the-art NPU sharing approaches.","PeriodicalId":501333,"journal":{"name":"arXiv - CS - Operating Systems","volume":"105 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Operating Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.04104","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Cloud platforms increasingly deploy hardware accelerators such as neural processing units (NPUs) to power machine learning (ML) inference services. To maximize resource utilization while ensuring reasonable quality of service, a natural approach is to virtualize NPUs so that multi-tenant ML services can share them efficiently. However, virtualizing NPUs for modern cloud platforms is not easy: NPU hardware lacks system abstraction support, and it also lacks the architectural and ISA support needed for fine-grained, dynamic operator scheduling on virtualized NPUs.

We present TCloud, a holistic NPU virtualization framework that applies virtualization techniques across the entire software and hardware stack. TCloud consists of:
(1) vNPU, a flexible NPU abstraction that enables fine-grained virtualization of the heterogeneous compute units in a physical NPU (pNPU);
(2) a vNPU resource allocator that enables a pay-as-you-go computing model and flexible vNPU-to-pNPU mappings for improved resource utilization and cost-effectiveness (a sketch of both follows below);
(3) an ISA extension to the modern NPU architecture that facilitates fine-grained tensor operator scheduling across multiple vNPUs (illustrated at the end of this abstract).

We implement TCloud on top of a production-level NPU simulator.
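
To make components (1) and (2) concrete, here is a minimal Python sketch of what a vNPU abstraction and a vNPU-to-pNPU allocator could look like. The names PNPU, VNPU, and allocate, the two engine kinds, and the first-fit policy are illustrative assumptions based only on this abstract, not TCloud's actual design.

```python
# Minimal sketch (assumed design, not TCloud's): a vNPU is a per-tenant
# slice of a physical NPU's heterogeneous compute units, and an allocator
# maps vNPUs onto pNPUs with enough free engines of each kind.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PNPU:
    """Physical NPU with two kinds of compute units (illustrative)."""
    matrix_engines: int          # e.g., systolic arrays for GEMM-style operators
    vector_engines: int          # e.g., SIMD units for element-wise operators
    free_matrix: int = field(init=False)
    free_vector: int = field(init=False)

    def __post_init__(self) -> None:
        self.free_matrix = self.matrix_engines
        self.free_vector = self.vector_engines

@dataclass
class VNPU:
    """Tenant-visible NPU: engines are requested per kind, so a tenant
    pays only for the units its model needs (pay-as-you-go)."""
    tenant: str
    matrix_engines: int
    vector_engines: int
    backing_pnpu: Optional[PNPU] = None

def allocate(vnpu: VNPU, pnpus: list[PNPU]) -> bool:
    """First-fit vNPU-to-pNPU mapping (an assumed policy): place the vNPU
    on the first pNPU with enough free engines of both kinds."""
    for p in pnpus:
        if p.free_matrix >= vnpu.matrix_engines and p.free_vector >= vnpu.vector_engines:
            p.free_matrix -= vnpu.matrix_engines
            p.free_vector -= vnpu.vector_engines
            vnpu.backing_pnpu = p
            return True
    return False  # no pNPU can host this vNPU right now

# Two tenants with complementary engine demands share one pNPU.
pool = [PNPU(matrix_engines=4, vector_engines=8)]
assert allocate(VNPU("tenant-a", matrix_engines=3, vector_engines=2), pool)
assert allocate(VNPU("tenant-b", matrix_engines=1, vector_engines=6), pool)
```

The point the sketch captures is that compute units are allocated per engine kind rather than per whole NPU, so tenants with complementary engine demands can share one pNPU instead of each occupying a full device.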
Our experiments show that, compared with state-of-the-art NPU sharing approaches, TCloud improves the throughput of ML inference services by up to 1.4×, reduces tail latency by up to 4.6×, and improves NPU utilization by 1.2× on average.
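
Component (3) concerns scheduling tensor operators from multiple vNPUs onto shared engines. The sketch below models one scheduling cycle in Python; the abstract does not describe the actual ISA extension, so the per-vNPU queues, the dispatch order, and the blocked-operator skipping are assumptions meant only to illustrate why fine-grained dynamic scheduling improves utilization.

```python
# Minimal sketch (assumed mechanism, not the paper's ISA extension): one
# scheduling cycle that dispatches tensor operators from per-vNPU queues
# onto idle engines, interleaving operators from multiple tenants.
from collections import deque

# Each pending operator is (name, engine_kind). Per-vNPU queues are an
# illustrative assumption; operators here are treated as independent.
op_queues = {
    "vnpu-a": deque([("matmul", "matrix"), ("relu", "vector")]),
    "vnpu-b": deque([("conv2d", "matrix"), ("softmax", "vector")]),
}
idle_engines = {"matrix": 1, "vector": 1}  # engines free this cycle

def schedule_cycle() -> list[tuple[str, str]]:
    """Dispatch at most one operator per vNPU per cycle. Skipping past a
    blocked operator (no idle engine of its kind) avoids head-of-line
    blocking, which is the payoff of fine-grained dynamic scheduling."""
    dispatched = []
    for vnpu, q in op_queues.items():
        for i, (op, kind) in enumerate(q):
            if idle_engines.get(kind, 0) > 0:
                del q[i]                   # deque supports indexed deletion
                idle_engines[kind] -= 1
                dispatched.append((vnpu, op))
                break
    return dispatched

print(schedule_cycle())  # [('vnpu-a', 'matmul'), ('vnpu-b', 'softmax')]
```

Note how vnpu-b's conv2d is blocked (the matrix engine is taken), yet its softmax still runs this cycle on the idle vector engine; a coarse per-NPU scheduler would have left that engine idle.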