DiaNet: An Efficient Multi-Grained Re-configurable Neural Network in Silicon
Renyuan Zhang, Yan Chen, Takashi Nakada, Y. Nakashima
2019 32nd IEEE International System-on-Chip Conference (SOCC), September 2019
DOI: 10.1109/SOCC46988.2019.1570548015
Citations: 2
Abstract
A hardware-friendly neural network topology is proposed in this work. Instead of full connections between neighboring layers, bisection-propagation from “parents” to “twins” is performed to reproduce the behavior of a conventional neural network. In this manner, the conventional dense-but-shallow topology is reorganized in a sparse-but-deep fashion. A large-scale array of synapses and neurons is symmetrically designed with VLSI circuits on-chip. According to specific application demands, the entire array is cut into arbitrary diamond-shaped pieces without redundant synapses. Each diamond-cut behaves as an independent neural network for its corresponding task, fully in parallel. Namely, the proposed network-on-chip is multi-grained re-configurable by configuring synapse and neuron behavior (fine-grained), reshaping the diamond-cut (medium-grained), and organizing multiple DiaNets (coarse-grained). To carry out the synapse and neuron computations, a set of analog calculation circuits is designed with 80 MOS transistors per processing unit, each unit comprising two synapses and one neuron with dual activation modes (sigmoid and rectified linear function). For proof of concept, several case studies of regression tasks with one, two, and nine variables are implemented by the proposed network. From the circuit simulation results, all the demonstrated regressions are executed by a compact hardware resource of 720 MOS transistors with a maximum power consumption of 19.4 µW. The regression error is about 4.2%, 4.3%, and 1.2% for the one-, two-, and nine-variable examples, respectively.
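For intuition, the sketch below gives a minimal behavioral model in Python of what one processing unit might compute: two synapse weights feeding a single neuron whose activation can be switched between sigmoid and rectified linear. The class name ProcessingUnit, the weight and bias parameters, and the exact two-parent wiring are illustrative assumptions inferred from the abstract, not the authors' circuit-level design.

```python
import numpy as np


def sigmoid(x):
    # Logistic activation, one of the two modes named in the abstract.
    return 1.0 / (1.0 + np.exp(-x))


def relu(x):
    # Rectified linear activation, the other mode named in the abstract.
    return np.maximum(0.0, x)


class ProcessingUnit:
    """Behavioral sketch (assumed, not the paper's circuit) of one DiaNet
    processing unit: two synapses (weights) and one neuron with a
    selectable activation mode."""

    def __init__(self, w0=0.0, w1=0.0, bias=0.0, mode="sigmoid"):
        self.w = np.array([w0, w1])
        self.bias = bias
        self.mode = mode  # "sigmoid" or "relu"

    def forward(self, parent0, parent1):
        # Each "twin" neuron receives only its two "parent" activations,
        # rather than a full connection to the previous layer.
        z = self.w[0] * parent0 + self.w[1] * parent1 + self.bias
        return sigmoid(z) if self.mode == "sigmoid" else relu(z)


# Example: one unit in ReLU mode applied to two parent activations.
pu = ProcessingUnit(w0=0.7, w1=-0.3, bias=0.1, mode="relu")
print(pu.forward(0.5, 0.2))  # ≈ 0.39
```

Under this reading, stacking many such two-input units yields the sparse-but-deep, diamond-shaped wiring the paper describes, whereas the actual chip realizes each unit with roughly 80 MOS transistors of analog circuitry.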