Anastasios Dimitriou;Lei Xun;Jonathon Hare;Geoff V. Merrett
DOI: 10.1109/TCAD.2024.3519055
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 44, no. 6, pp. 2195-2203
Publication date: 2024-12-16 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10804199/
Realization of Early-Exit Dynamic Neural Networks on Reconfigurable Hardware
Early-exiting is a strategy that is becoming popular in deep neural networks (DNNs), as it can lead to faster execution and reduced computational intensity during inference. To achieve this, intermediate classifiers abstract information from the input samples to strategically stop forward propagation and generate an output at an earlier stage. Confidence criteria are used to identify easier-to-recognize samples, as opposed to those that need further filtering. However, such dynamic DNNs have so far only been realized on conventional computing systems (CPU+GPU) using libraries designed for static networks. In this article, we first explore the feasibility and benefits of realizing early-exit dynamic DNNs on field-programmable gate arrays (FPGAs), a platform already proven to be highly effective for neural network applications. We consider two approaches for implementing and executing the intermediate classifiers: 1) pipeline, which uses existing hardware, and 2) parallel, which uses additional dedicated modules. We model their energy needs and execution time and explore their performance using the BranchyNet early-exit approach on LeNet-5, AlexNet, VGG19, and ResNet32, running on a Xilinx ZCU106 Evaluation Board. We found that the dynamic approaches are at least 24% faster than a static network executed on an FPGA, while consuming at least 1.32× less energy. We further observe that FPGAs can enhance the performance of early-exit dynamic DNNs by minimizing the overhead introduced by the intermediate decision classifiers through parallel execution. Finally, we compare the two approaches and identify which is best for different network types and confidence levels.
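To make the early-exit mechanism described in the abstract concrete, the following is a minimal, framework-free sketch of BranchyNet-style inference: each stage computes features, an intermediate classifier produces logits, and forward propagation stops as soon as the prediction entropy falls below a confidence threshold. The stage structure, the entropy threshold value, and the toy classifiers are illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def early_exit_inference(stages, x, entropy_threshold=0.5):
    """Run `stages` (a list of (features_fn, classifier_fn) pairs) in order.

    After each intermediate classifier, exit if the prediction entropy is
    below `entropy_threshold` (i.e., the classifier is confident); the last
    stage always produces an output. Returns (predicted_class, exit_index).
    """
    h = x
    for i, (features, classifier) in enumerate(stages):
        h = features(h)
        probs = softmax(classifier(h))
        # Low entropy -> confident prediction -> stop forward propagation.
        if entropy(probs) < entropy_threshold or i == len(stages) - 1:
            return max(range(len(probs)), key=probs.__getitem__), i

# Toy two-stage network: "features" are identity, classifiers are fixed
# linear maps. An easy sample exits at the early branch; a hard one
# propagates to the final classifier.
stages = [
    (lambda h: h, lambda h: [3.0 * h[0], 0.0]),   # early-exit branch
    (lambda h: h, lambda h: [0.0, 5.0 * h[1]]),   # final classifier
]
print(early_exit_inference(stages, [2.0, 0.1]))   # confident early: (0, 0)
print(early_exit_inference(stages, [0.1, 0.1]))   # falls through: (1, 1)
```

In the paper's pipeline variant, the stages share existing hardware and the exit check sits in the dataflow; in the parallel variant, dedicated FPGA modules evaluate the branch classifier concurrently with the backbone, hiding the decision overhead that the abstract attributes to intermediate classifiers.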
Journal Introduction:
The purpose of this Transactions is to publish papers of interest to individuals in the area of computer-aided design of integrated circuits and systems composed of analog, digital, mixed-signal, optical, or microwave components. The aids include methods, models, algorithms, and man-machine interfaces for system-level, physical, and logical design, including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, hardware-software co-design, and documentation of integrated circuit and system designs of all complexities. Design tools and techniques for evaluating and designing integrated circuits and systems for metrics such as performance, power, reliability, testability, and security are a focus.