DSL-based SNN accelerator design using Chisel
Patrick Plagwitz, Frank Hannig, Jürgen Teich, Oliver Keszocze
Microprocessors and Microsystems, Volume 118, Article 105187 (published 3 September 2025). DOI: 10.1016/j.micpro.2025.105187
Abstract:
Neural Networks (NNs) are a very active field of research with wide-ranging applications in industry. Spiking Neural Networks (SNNs) are an emerging type of NN that is promising for hardware acceleration and low energy requirements. However, design automation for accelerator circuit generation still lacks suitable search techniques for optimizing network parameters, including the selection of appropriate neuron models and spike encodings. Existing approaches are often restricted to implementing a single network configuration and/or a fixed hardware architecture.
In this paper, we present a novel multi-layer Domain-Specific Language (DSL) for constructing sequential circuits, including building blocks for pipelines with hazard detection. As the host language, we use Chisel, a hardware construction language that allows hardware to be expressed at the Register-Transfer Level and above. In contrast to applying High-Level Synthesis, we build the DSL for SNN accelerator design on top of Chisel by defining building blocks for SNNs. After introducing this DSL, we present a full SNN accelerator generation framework that covers all phases, from training to deployment. We also propose a design space exploration over SNN accelerator designs that vary in neuron models, their parametrizations, and spike encodings. The generated designs are evaluated in terms of execution time, power consumption, classification accuracy, and resource usage when mapped to Field-Programmable Gate Arrays (FPGAs) for the MNIST, Fashion-MNIST, SVHN, and CIFAR-10 data sets.
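To make the idea of SNN building blocks and their parametrization concrete, the following is a minimal, hypothetical sketch of how a leaky integrate-and-fire (LIF) neuron could be expressed directly in Chisel; it is not the paper's actual DSL, and all names (LifConfig, LifNeuron) and parameter values are assumptions chosen for illustration. The configuration case class stands in for the kind of parameters (bit widths, thresholds, leak factors) that a design space exploration could sweep over.

```scala
import chisel3._

// Hypothetical design-space parameters that an exploration flow could sweep over.
case class LifConfig(
  potentialWidth: Int = 16,  // bit width of the membrane potential
  threshold:      Int = 512, // firing threshold
  leakShift:      Int = 4    // leak modeled as an arithmetic right shift per step
)

// A leaky integrate-and-fire neuron as a reusable hardware building block.
class LifNeuron(cfg: LifConfig) extends Module {
  val io = IO(new Bundle {
    val inCurrent = Input(SInt(cfg.potentialWidth.W)) // accumulated weighted input
    val valid     = Input(Bool())                     // new input available this cycle
    val spike     = Output(Bool())                    // output spike
  })

  // Membrane potential state register.
  val potential = RegInit(0.S(cfg.potentialWidth.W))

  // Leak: decay the potential by a power of two each time step.
  val leaked = potential - (potential >> cfg.leakShift)

  // Integrate the incoming current when an input is valid.
  val integrated = Mux(io.valid, leaked + io.inCurrent, leaked)

  // Fire and reset to zero once the threshold is crossed.
  val fire = integrated >= cfg.threshold.S
  potential := Mux(fire, 0.S, integrated)
  io.spike  := fire
}
```

An exploration flow could then generate and synthesize accelerator variants simply by instantiating this module with different LifConfig values, which is in the spirit of the parametrization over neuron models and spike encodings that the paper evaluates on FPGAs.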
About the journal:
Microprocessors and Microsystems: Embedded Hardware Design (MICPRO) is a journal covering all design and architectural aspects related to embedded systems hardware. This includes different embedded system hardware platforms, ranging from custom hardware via reconfigurable systems and application-specific processors to general-purpose embedded processors. Special emphasis is put on novel complex embedded architectures, such as systems on chip (SoC), systems on a programmable/reconfigurable chip (SoPC), and multi-processor systems on a chip (MPSoC), as well as their memory and communication methods and structures, such as networks-on-chip (NoC).
Design automation of such systems, including methodologies, techniques, flows, and tools for their design, as well as novel designs of hardware components, falls within the scope of this journal. Novel cyber-physical applications that use embedded systems are also central to this journal. While software is not the main focus of this journal, methods of hardware/software co-design, as well as application restructuring and mapping to embedded hardware platforms that consider the interplay between software and hardware components with an emphasis on hardware, are also within the journal's scope.