Self-Checking Residue Number System for Low-Power Reliable Neural Network

Tsung-Chu Huang
{"title":"低功耗可靠神经网络的自检剩余数系统","authors":"Tsung-Chu Huang","doi":"10.1109/ATS47505.2019.000-3","DOIUrl":null,"url":null,"abstract":"Neural Network suffers four major issues including acceleration, power consumption, area overhead and fault tolerance. In this paper we develop a systematic approach to design a low-power, compact, fast and reliable neural network based on a redundant residue number system. Residue number systems have been applied in designing neural network except the CORDIC-based activation functions including hypertangent, logistic and softmax functions. This issue results in that the entire neural network cannot be totally self-checked and extra operations make the time, power and area reductions wasted. In our systematic approach we propose some design rules for ensuring the checking rate without loss of reductions in time, area and power consumption. From experiments on three neu-ral network with 24-bit fixed-point operations for the MNIST handwritten digit data set, 3, 4, and 5 moduli are separately employed for achieving all balanced improvements in power-saving, area-reduction, speed-acceleration and reliability pro-motion. Experimental results show that all the power, time and area can be reduced to only about one third, and the entire network in any combination of software and hardware can be self-checked in an aliasing rate of only 0.39% and TMR-correctable under the single-residue fault model.","PeriodicalId":258824,"journal":{"name":"2019 IEEE 28th Asian Test Symposium (ATS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Self-Checking Residue Number System for Low-Power Reliable Neural Network\",\"authors\":\"Tsung-Chu Huang\",\"doi\":\"10.1109/ATS47505.2019.000-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural Network suffers four major issues including acceleration, power consumption, area overhead and fault tolerance. In this paper we develop a systematic approach to design a low-power, compact, fast and reliable neural network based on a redundant residue number system. Residue number systems have been applied in designing neural network except the CORDIC-based activation functions including hypertangent, logistic and softmax functions. This issue results in that the entire neural network cannot be totally self-checked and extra operations make the time, power and area reductions wasted. In our systematic approach we propose some design rules for ensuring the checking rate without loss of reductions in time, area and power consumption. From experiments on three neu-ral network with 24-bit fixed-point operations for the MNIST handwritten digit data set, 3, 4, and 5 moduli are separately employed for achieving all balanced improvements in power-saving, area-reduction, speed-acceleration and reliability pro-motion. 
Experimental results show that all the power, time and area can be reduced to only about one third, and the entire network in any combination of software and hardware can be self-checked in an aliasing rate of only 0.39% and TMR-correctable under the single-residue fault model.\",\"PeriodicalId\":258824,\"journal\":{\"name\":\"2019 IEEE 28th Asian Test Symposium (ATS)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 28th Asian Test Symposium (ATS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ATS47505.2019.000-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 28th Asian Test Symposium (ATS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ATS47505.2019.000-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

Neural networks suffer from four major issues: acceleration, power consumption, area overhead, and fault tolerance. In this paper we develop a systematic approach to designing a low-power, compact, fast, and reliable neural network based on a redundant residue number system. Residue number systems have been applied to neural network design, but not to the CORDIC-based activation functions, including the hyperbolic tangent, logistic, and softmax functions. As a result, the entire neural network cannot be fully self-checked, and the extra operations squander the savings in time, power, and area. In our systematic approach we propose design rules that guarantee the checking rate without sacrificing the reductions in time, area, and power consumption. In experiments on three neural networks with 24-bit fixed-point operations for the MNIST handwritten digit data set, 3, 4, and 5 moduli are employed, respectively, to achieve balanced improvements in power saving, area reduction, speed, and reliability. Experimental results show that power, time, and area can each be reduced to only about one third, and that the entire network, in any combination of software and hardware, can be self-checked with an aliasing rate of only 0.39% and is TMR-correctable under the single-residue fault model.
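
The paper targets a hardware design, but the underlying idea of a redundant residue number system (RRNS) can be illustrated with a minimal sketch. The moduli (7, 11, 13) and the redundant modulus 17 below are arbitrary illustrative choices, not the 3-, 4-, or 5-modulus sets used in the paper, and the check shown is a plain CRT-based consistency check rather than the paper's design rules: each multiply-accumulate runs independently on every residue channel, and a mismatch between the value reconstructed from the information channels and the redundant channel flags a single-residue fault.

```python
from math import prod

MODULI = (7, 11, 13)          # information moduli (pairwise coprime); illustrative only
REDUNDANT = 17                # one redundant modulus used purely for checking
ALL_MODULI = MODULI + (REDUNDANT,)
M = prod(MODULI)              # legitimate dynamic range is [0, M)

def encode(x):
    """Encode a non-negative integer as its residue vector."""
    return tuple(x % m for m in ALL_MODULI)

def mac(acc, a, b):
    """Multiply-accumulate carried out independently on every residue channel."""
    return tuple((r + ra * rb) % m for r, ra, rb, m in zip(acc, a, b, ALL_MODULI))

def crt(residues):
    """Chinese-remainder reconstruction from the information residues only."""
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m) is the modular inverse (Python 3.8+)
    return x % M

def self_check(rv):
    """A single faulty residue makes the reconstructed value disagree with
    the redundant channel, so a mismatch flags the fault."""
    return crt(rv[:len(MODULI)]) % REDUNDANT == rv[-1]

# Usage: a small dot product computed channel-wise and then checked.
a, b = [3, 5, 2], [4, 1, 6]
acc = encode(0)
for ai, bi in zip(a, b):
    acc = mac(acc, encode(ai), encode(bi))
assert self_check(acc)
assert crt(acc[:len(MODULI)]) == sum(x * y for x, y in zip(a, b))  # 29
```

This sketch stops at detection; in the paper, correction of a detected single-residue fault is handled by triple modular redundancy (TMR) rather than by additional redundant moduli.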