KunlunTVM: A Compilation Framework for Kunlun Chip Supporting Both Training and Inference
Jun Zeng, Mingyan Kou, Hailong Yao
Proceedings of the Great Lakes Symposium on VLSI 2022, June 6, 2022
DOI: 10.1145/3526241.3530316
Abstract
With the rapid development of deep learning, training large neural network models demands an enormous amount of computing power. Therefore, many accelerators have been designed to meet the performance requirements. Recently, a series of Kunlun chips has been released, claiming performance comparable to GPUs. However, there is no end-to-end compiler that supports both training and inference on the Kunlun chip, leaving a large performance optimization space unexplored. This paper presents KunlunTVM, the first end-to-end compiler based on TVM that supports both training and inference tasks on the Kunlun chip. Experimental results show that KunlunTVM achieves up to a 5x training performance improvement over PaddlePaddle, the existing framework supporting the Kunlun chip. Notably, the proposed methods are general and extensible to the TVM framework targeting different backends.