{"title":"VTA编译指令流的数据访问和调度优化设计","authors":"Ruohan Cheng , Yanshuo Gao , Chenglong Zeng , Yinghai Zhao , Kuizhi Mei","doi":"10.1016/j.future.2025.108165","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, the development and rapid implementation of convolutional neural network models have become a key research area in the field of deep learning, and model deployment schemes based on deep learning compilers have been widely studied. Acceleration of the process of convolution operations and improvement of model inference performance is one of the important research areas in the field of deep learning compilers. In this paper, based on the open source deep learning compilation framework TVM and the architecture of the deep learning accelerator VTA, we propose a minimum data access design based on the input prioritized schedule and the on-chip weight-memory reuse scheme, which provides an optimization scheme with generality for inference acceleration of convolutional neural networks. By applying the optimized schedule scheme proposed, we can avoid redundant data accesses for convolutional computation with proper shape. Comparison experiments show that the model inference time of YOLOv3 is reduced by about 10<span><math><mo>%</mo></math></span> with limited hardware resources.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"176 ","pages":"Article 108165"},"PeriodicalIF":6.2000,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Design of data access and schedule optimization for VTA compiled instruction streams\",\"authors\":\"Ruohan Cheng , Yanshuo Gao , Chenglong Zeng , Yinghai Zhao , Kuizhi Mei\",\"doi\":\"10.1016/j.future.2025.108165\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In recent years, the development and rapid implementation of convolutional neural network models have become a key research area in the field of deep learning, and model deployment schemes based on deep learning compilers have been widely studied. Acceleration of the process of convolution operations and improvement of model inference performance is one of the important research areas in the field of deep learning compilers. In this paper, based on the open source deep learning compilation framework TVM and the architecture of the deep learning accelerator VTA, we propose a minimum data access design based on the input prioritized schedule and the on-chip weight-memory reuse scheme, which provides an optimization scheme with generality for inference acceleration of convolutional neural networks. By applying the optimized schedule scheme proposed, we can avoid redundant data accesses for convolutional computation with proper shape. 
Comparison experiments show that the model inference time of YOLOv3 is reduced by about 10<span><math><mo>%</mo></math></span> with limited hardware resources.</div></div>\",\"PeriodicalId\":55132,\"journal\":{\"name\":\"Future Generation Computer Systems-The International Journal of Escience\",\"volume\":\"176 \",\"pages\":\"Article 108165\"},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2025-09-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Future Generation Computer Systems-The International Journal of Escience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167739X25004595\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25004595","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Design of data access and schedule optimization for VTA compiled instruction streams
Abstract: In recent years, the development and rapid deployment of convolutional neural network models have become a key research area in deep learning, and model deployment schemes based on deep learning compilers have been widely studied. Accelerating convolution operations and improving model inference performance are important research directions for deep learning compilers. In this paper, based on the open-source deep learning compilation framework TVM and the architecture of the deep learning accelerator VTA, we propose a minimum-data-access design built on an input-prioritized schedule and an on-chip weight-memory reuse scheme, which provides a general optimization scheme for accelerating convolutional neural network inference. By applying the proposed schedule, redundant data accesses can be avoided for convolution computations of suitable shape. Comparison experiments show that the model inference time of YOLOv3 is reduced by about 10% under limited hardware resources.
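To make the redundant-access argument concrete, here is a minimal back-of-envelope sketch in Python. It is not the paper's implementation: the function name, the halo-based tiling model, the one-byte-per-element (int8) assumption, and the layer and tile shapes are all illustrative assumptions. It only contrasts the estimated DRAM traffic of a tiled convolution when weights are re-fetched for every output tile versus loaded once and kept resident on chip, which is the kind of reuse the abstract's weight-memory scheme targets.

```python
import math

def dram_traffic(H, W, Cin, Cout, K, tile_h, tile_w, reuse_weights):
    """Estimate bytes moved from DRAM for one KxK conv layer (1 byte/element).

    The output plane is tiled into tile_h x tile_w blocks. For each output
    tile, the matching input halo (tile plus a K-1 border) is loaded once.
    If reuse_weights is False, the full weight tensor is re-fetched for
    every tile; if True, weights are loaded once and stay on chip.
    """
    n_tiles = math.ceil(H / tile_h) * math.ceil(W / tile_w)
    input_bytes = n_tiles * (tile_h + K - 1) * (tile_w + K - 1) * Cin
    weight_bytes = Cout * Cin * K * K * (1 if reuse_weights else n_tiles)
    output_bytes = H * W * Cout  # each output element is written once
    return input_bytes + weight_bytes + output_bytes

# Illustrative mid-network layer shape: 52x52x256 input, 3x3 conv to 512 channels.
naive = dram_traffic(52, 52, 256, 512, 3, tile_h=13, tile_w=13, reuse_weights=False)
reuse = dram_traffic(52, 52, 256, 512, 3, tile_h=13, tile_w=13, reuse_weights=True)
print(f"no weight reuse: {naive / 1e6:.1f} MB, with on-chip reuse: {reuse / 1e6:.1f} MB")
```

Under these toy numbers, the no-reuse variant moves roughly 21 MB per layer versus about 3.5 MB with reuse, with nearly all of the difference coming from repeated weight fetches. Whether an input-prioritized schedule actually reaches the minimum for a given layer depends on VTA buffer-size details that the abstract does not specify.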
Journal Introduction:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.