Authors: Ruofei Hu, Binren Tian, S. Yin, Shaojun Wei
DOI: 10.1109/ICDSP.2018.8631588
Published in: 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), November 2018
Citations: 5
Efficient Hardware Architecture of Softmax Layer in Deep Neural Network

Abstract
Deep neural networks (DNNs), as a very important machine learning technique for classification and detection tasks on images, video, speech, and audio, have recently received tremendous attention. Integral Stochastic Computation (Integral SC), on the other hand, has proved its extraordinary ability in the hardware implementation of DNNs. The softmax layer is widely used in multi-class classification tasks and is a basic and important network layer in DNNs. However, the hardware implementation of the softmax layer is expensive because of its exponentiation and division computations. In this paper, we design an efficient way to implement the softmax layer in DNNs based on Integral stochastic computing, filling a gap left by previous academic work. Compared to a conventional softmax hardware implementation, our method reduces power and area by 68% and 41%, respectively.
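For context, a minimal software sketch of the standard softmax function (my own illustration, not the paper's Integral SC hardware design) makes visible the per-element exponentiation and the final division that the abstract identifies as the expensive operations to realize in hardware:

```python
import math

def softmax(logits):
    """Reference softmax: exp() per element, then a normalizing division."""
    # Subtract the max logit first; this is the usual numerical-stability
    # trick and does not change the result.
    m = max(logits)
    # One exponentiation per class output...
    exps = [math.exp(x - m) for x in logits]
    # ...and one division per class output to normalize into probabilities.
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
```

The outputs sum to 1 and preserve the ordering of the logits; it is exactly these exp and divide units that direct hardware implementations must instantiate, which motivates approximating the layer with stochastic computing instead.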