An Elastic Neural Network Toward Multi-Grained Re-configurable Accelerator

Man Wu, Yan Chen, Yirong Kan, Takeshi Nomura, Renyuan Zhang, Y. Nakashima

2020 18th IEEE International New Circuits and Systems Conference (NEWCAS), June 2020
DOI: 10.1109/newcas49341.2020.9159845
Citations: 1
Abstract
A bisection topology for neural networks (NNs) is developed as an alternative to the conventional fully connected (FC) fashion. Each neuron communicates through only two synapses with adjacent neurons in the previous layer, and outputs its data to two neurons in the subsequent layer. A large number of neurons and synapses can therefore be implemented symmetrically and in parallel by the computational hardware. In this manner, the entire network can be partitioned into arbitrary diamond-shaped pieces (called DiaNets) that realize the NN functions, theoretically without any redundancy. Assuming such a topology is implemented on-chip in parallel, the DiaNets support multi-grained re-configuration to offer flexible function units. Various behaviors of conventional NNs are efficiently reproduced by the proposed DiaNet topology while maintaining high fidelity of results. In addition, two optimization techniques, overlapping and reshaping, are proposed to further reduce the number of synapses. On the Wine dataset, our results show that the number of synapses is reduced to 36.3% without accuracy loss. Finally, the bit precision of the DiaNet is investigated to suggest guidelines toward efficient hardware implementations.
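The abstract does not include code, but the two-synapse-per-neuron idea can be illustrated with a minimal sketch. The layer below is a hypothetical interpretation, not the paper's implementation: it assumes each output neuron combines its aligned predecessor and the next adjacent neuron (with wrap-around at the boundary, a choice made here purely for illustration), and contrasts the synapse count with a fully connected layer of the same width.

```python
def bisection_layer(x, w):
    """One layer of a bisection-style (DiaNet-like) topology.

    Each output neuron i receives exactly two inputs: the aligned
    neuron x[i] and the adjacent neuron x[i+1] (wrapping at the end),
    weighted by the two synapses w[i][0] and w[i][1].

    x : list of previous-layer activations, length n
    w : list of n weight pairs (two synapses per neuron)
    """
    n = len(x)
    return [w[i][0] * x[i] + w[i][1] * x[(i + 1) % n] for i in range(n)]


# Synapse count: a full connection between two n-neuron layers needs
# n * n synapses; the bisection topology needs only 2 * n.
n = 8
x = [1.0] * n
w = [(0.5, 0.5)] * n
y = bisection_layer(x, w)   # each output mixes two adjacent inputs
print(y)
print(n * n, "FC synapses vs", 2 * n, "bisection synapses")
```

With all-ones inputs and averaging weights, every output is again 1.0, showing that the layer simply blends each pair of adjacent activations; the synapse count drops from 64 (FC) to 16 here, which is the kind of reduction the overlapping and reshaping optimizations then push further.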