{"title":"基于二值化和随机计算的神经网络容错性","authors":"Amir Ardakani, A. Ardakani, W. Gross","doi":"10.1109/SiPS52927.2021.00018","DOIUrl":null,"url":null,"abstract":"Both binarized and stochastic computing-based neural networks exploit bit-wise operations to replace expensive full-precision multiplications with simple XNOR gates and thus, offer low-cost hardware implementation. In stochastic computing, arithmetic computations are performed on sequences of random bits which can approximate any real values. Stochastic computing-based neural networks benefit from approximate computing and promote fault-tolerant architectures against soft errors in noisy environments. On the other hand, in binarized neural networks, real values are deterministically binarized using the sign function. As a result, any bit-flip in the binarized values dramatically changes the outcome of arithmetic computations and makes binarized neural networks more vulnerable against soft errors. In this paper, we compare these two neural networks against each other in terms of fault-tolerance and hardware complexity (i.e., area and energy efficiency).","PeriodicalId":103894,"journal":{"name":"2021 IEEE Workshop on Signal Processing Systems (SiPS)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Fault-Tolerance of Binarized and Stochastic Computing-based Neural Networks\",\"authors\":\"Amir Ardakani, A. Ardakani, W. Gross\",\"doi\":\"10.1109/SiPS52927.2021.00018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Both binarized and stochastic computing-based neural networks exploit bit-wise operations to replace expensive full-precision multiplications with simple XNOR gates and thus, offer low-cost hardware implementation. In stochastic computing, arithmetic computations are performed on sequences of random bits which can approximate any real values. Stochastic computing-based neural networks benefit from approximate computing and promote fault-tolerant architectures against soft errors in noisy environments. On the other hand, in binarized neural networks, real values are deterministically binarized using the sign function. As a result, any bit-flip in the binarized values dramatically changes the outcome of arithmetic computations and makes binarized neural networks more vulnerable against soft errors. 
In this paper, we compare these two neural networks against each other in terms of fault-tolerance and hardware complexity (i.e., area and energy efficiency).\",\"PeriodicalId\":103894,\"journal\":{\"name\":\"2021 IEEE Workshop on Signal Processing Systems (SiPS)\",\"volume\":\"99 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Workshop on Signal Processing Systems (SiPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SiPS52927.2021.00018\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Workshop on Signal Processing Systems (SiPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SiPS52927.2021.00018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fault-Tolerance of Binarized and Stochastic Computing-based Neural Networks
Both binarized and stochastic computing-based neural networks exploit bit-wise operations to replace expensive full-precision multiplications with simple XNOR gates, and thus offer low-cost hardware implementations. In stochastic computing, arithmetic is performed on sequences of random bits whose statistics approximate real values. Stochastic computing-based neural networks therefore benefit from approximate computing and naturally lend themselves to fault-tolerant architectures that withstand soft errors in noisy environments. In binarized neural networks, by contrast, real values are deterministically binarized using the sign function. As a result, any bit-flip in the binarized values drastically changes the outcome of the arithmetic, making binarized neural networks more vulnerable to soft errors. In this paper, we compare these two types of neural networks in terms of fault tolerance and hardware complexity (i.e., area and energy efficiency).
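The contrast the abstract draws can be made concrete with a small sketch. The following Python example (a minimal illustration, not the paper's experimental setup; the bipolar encoding, stream length, and helper names are assumptions chosen for clarity) shows that XNOR implements multiplication on bipolar stochastic bit-streams, that a single bit-flip in such a stream barely perturbs the decoded value, and that flipping the one bit encoding a binarized weight negates it outright.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stochastic(x, length=1024):
    """Encode x in [-1, 1] as a bipolar stochastic bit-stream:
    P(bit = 1) = (x + 1) / 2."""
    return (rng.random(length) < (x + 1) / 2).astype(np.uint8)

def from_stochastic(bits):
    """Decode a bipolar stream back to a value in [-1, 1]."""
    return 2 * bits.mean() - 1

def xnor(a, b):
    """Bit-wise XNOR: multiplication in the bipolar SC domain,
    and the product of two {-1, +1} values in a binarized network."""
    return 1 - (a ^ b)

# Stochastic computing: XNOR of two independent bipolar streams
# approximates the product x * y.
x, y = 0.5, -0.25
sx, sy = to_stochastic(x), to_stochastic(y)
print(from_stochastic(xnor(sx, sy)))   # ~ -0.125, up to sampling noise

# A single soft error (bit-flip) shifts the decoded value by only
# 2 / length, so the result is essentially unchanged.
sx_faulty = sx.copy()
sx_faulty[0] ^= 1
print(from_stochastic(xnor(sx_faulty, sy)))  # still ~ -0.125

# In a binarized network, by contrast, one bit encodes sign(w), so a
# single bit-flip negates the weight and flips the product's sign.
w, a_in = 1, -1          # binarized weight and activation in {-1, +1}
print(w * a_in)          # -1
print((-w) * a_in)       # +1 after a single bit-flip on the weight
```

This is the intuition behind the paper's comparison: a soft error is a small perturbation of a long stochastic stream but a full sign inversion of a deterministically binarized value.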