Title: Efficient and Fast High-Performance Library Generation for Deep Learning Accelerators
Authors: Jun Bi; Yuanbo Wen; Xiaqing Li; Yongwei Zhao; Yuxuan Guo; Enshuai Zhou; Xing Hu; Zidong Du; Ling Li; Huaping Chen; Tianshi Chen; Qi Guo
DOI: 10.1109/TC.2024.3475575
Journal: IEEE Transactions on Computers, vol. 74, no. 1, pp. 155-169
Publication date: 2024-10-08 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10707341/
JCR: Q2, Computer Science, Hardware & Architecture; Impact Factor 3.6; Region 2 (Computer Science)
Citations: 0
Abstract
The widespread adoption of deep learning accelerators (DLAs) underscores their pivotal role in improving the performance and energy efficiency of neural networks. To fully leverage the capabilities of these accelerators, exploration-based library generation approaches have been widely used to substantially reduce software development overhead. However, these approaches suffer from sub-optimal optimization results and excessive optimization overheads. In this paper, we propose Heron to generate high-performance libraries for DLAs efficiently and quickly. The key is to automatically enforce massive constraints throughout the entire program generation process and to guide the exploration with an accurate pre-trained cost model. Heron represents the search space as a constraint satisfaction problem (CSP) and explores the space by evolving the CSPs, so the sophisticated constraints of the search space are strictly preserved during the entire exploration process. The exploration algorithm can flexibly conduct the search using either online-trained or pre-trained cost models. Experimental results demonstrate that Heron achieves an average speedup of 2.71× over three state-of-the-art automatic generation approaches. Compared to vendor-provided hand-tuned libraries, Heron achieves a 2.00× speedup on average. When employing a pre-trained model, Heron achieves an 11.6× compilation-time speedup while incurring only a minor impact on execution time.
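The core idea the abstract describes — keeping every explored candidate inside a hard constraint set while an evolutionary loop ranks candidates with a cost model — can be illustrated with a minimal, hypothetical sketch. This is not Heron's implementation or API; the kernel dimensions, buffer limit, and the stand-in cost model below are all invented for illustration.

```python
import random

# Toy schedule search space: tile sizes (tm, tn) for an M x N kernel.
# Hard constraints (the "CSP"): tiles must divide the dimensions and
# the tile must fit in an on-chip buffer. All values are illustrative.
M, N = 512, 1024
BUFFER = 16384  # assumed on-chip capacity, in elements

def satisfies(tm, tn):
    """All hard constraints of the search space."""
    return M % tm == 0 and N % tn == 0 and tm * tn <= BUFFER

def divisors(x):
    return [d for d in range(1, x + 1) if x % d == 0]

def sample_candidate():
    """Rejection-sample a constraint-satisfying point; (1, 1) always works."""
    while True:
        tm, tn = random.choice(divisors(M)), random.choice(divisors(N))
        if satisfies(tm, tn):
            return (tm, tn)

def cost_model(tm, tn):
    """Stand-in for a learned cost model: prefer large, square-ish tiles.
    Lower is better."""
    return -(tm * tn) + abs(tm - tn)

def mutate(cand):
    """Perturb one parameter, discarding mutations that leave the CSP."""
    for _ in range(100):
        tm, tn = cand
        if random.random() < 0.5:
            tm = random.choice(divisors(M))
        else:
            tn = random.choice(divisors(N))
        if satisfies(tm, tn):
            return (tm, tn)
    return cand  # fall back to the unmodified candidate

def evolve(generations=20, pop_size=16, elite=4):
    """Evolutionary search in which every individual, in every
    generation, satisfies the constraints by construction."""
    pop = [sample_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: cost_model(*c))
        parents = pop[:elite]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return min(pop, key=lambda c: cost_model(*c))

best = evolve()
```

The design point this sketch mirrors is that constraint checking happens at candidate-generation time (sampling and mutation), not as a post-hoc filter on finished programs, so no exploration budget is wasted on invalid schedules.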
Journal Introduction:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.