{"title":"Compare with the Traditional Heterogeneous Solution: Accelerate Neural Network Algorithm through Heterogeneous Integrated CPU+NPU Chip on Server","authors":"Xiancheng Lin, Xiangyu Zhou, Rongkai Liu, Xiang Gao","doi":"10.1109/CCAI57533.2023.10201248","DOIUrl":null,"url":null,"abstract":"The increasing popularity of artificial intelligence (AI) requires the ability to process intensive data and efficient heterogeneous computing power. As a result, a heterogeneous integration scheme involving both central processing units (CPUs) and neural processing units (NPUs) has become increasingly prevalent in various edge terminals, such as mobile phones. Compared with traditional separated heterogeneous solutions, the integration scheme can effectively reduce the distance and number of data transmissions, thereby accelerating deep neural network (DNN) models and improving energy efficiency. Due to the low power requirements of cloud computing, heterogeneous integration solutions are beginning to be used in the design of processor architectures for servers. The TF16110 integrates NPUs into server CPUs, creating an efficient parallel computing solution for servers that lack GPUs or other AI acceleration devices. In this paper, we evaluate and analyze commonly used DNN models. Compared with NVIDIA’s TX2 GPU, the heterogeneous integrated CPU+NPU design can provide similar computational power and achieve 5x higher energy efficiency and 10x cost-effectiveness under the premise of ensuring accuracy","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCAI57533.2023.10201248","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The increasing popularity of artificial intelligence (AI) demands the ability to process data-intensive workloads and efficient heterogeneous computing power. As a result, heterogeneous integration schemes that combine central processing units (CPUs) and neural processing units (NPUs) have become increasingly prevalent in edge terminals such as mobile phones. Compared with traditional separated heterogeneous solutions, the integrated scheme effectively reduces both the distance and the number of data transfers, thereby accelerating deep neural network (DNN) models and improving energy efficiency. Because cloud computing also imposes low-power requirements, heterogeneous integration solutions are beginning to be adopted in the design of server processor architectures. The TF16110 integrates NPUs into a server CPU, creating an efficient parallel computing solution for servers that lack GPUs or other AI acceleration devices. In this paper, we evaluate and analyze commonly used DNN models on this platform. Compared with NVIDIA's TX2 GPU, the heterogeneous integrated CPU+NPU design provides similar computational power while achieving 5x higher energy efficiency and 10x better cost-effectiveness without sacrificing accuracy.
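The headline comparisons quoted above (similar compute, 5x energy efficiency, 10x cost-effectiveness) are ratios of throughput per watt and throughput per unit cost. The sketch below shows one common way such ratios are derived; it is an illustrative assumption, not the paper's stated methodology, and the function name, parameters, and units are hypothetical.

```python
# Minimal sketch (assumption, not from the paper): computing energy-efficiency
# and cost-effectiveness ratios between two inference platforms.
# Throughput, power, and price values must come from the reader's own
# measurements; no figures from the paper are reproduced here.

def efficiency_ratios(throughput_a, power_a, price_a,
                      throughput_b, power_b, price_b):
    """Return (energy-efficiency ratio, cost-effectiveness ratio)
    of platform A relative to platform B.

    throughput_*  inferences per second
    power_*       average power draw in watts
    price_*       hardware cost in a common currency
    """
    energy_eff_a = throughput_a / power_a   # inferences per joule
    energy_eff_b = throughput_b / power_b
    cost_eff_a = throughput_a / price_a     # inferences/s per unit cost
    cost_eff_b = throughput_b / price_b
    return energy_eff_a / energy_eff_b, cost_eff_a / cost_eff_b
```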