{"title":"RAM-Based Neural Network Parallel Implementation on a Reconfigurable Platform and Its Application for Handwritten Digits Recognition","authors":"Shefa A. Dawwd, A. Al-Saegh","doi":"10.33899/rengj.2015.101082","DOIUrl":null,"url":null,"abstract":"Artificial neural networks (ANNs) are widely used in different areas of nowadays applications. Many challenges are imposed on the practical implementation of ANNs. Some of them are: the number of samples required to train the network; the number of adders, multipliers, nonlinear transfer functions, storage elements; and the speed of calculations in either training phase or recall phase. In this paper, the RAM-based neural network is investigated. No weights, adders, multipliers, transfer functions are required to implement it neither in hardware nor in software, but at a cost of large RAM utilization. In addition, a small number of samples are required for training. However, in hardware implementation, a large size of memory is required to train it. The network is implemented on the FPGA platform. The Stratix IV GX FPGA development board, which is provided on large on board RAM, is used. A considerable speedup of 237 is achieved in either training or recalling phases. A comparable error rate of 7.6 is achieved when MNIST (Mixed National Institute of Standards and Technology) database are used to train the network on handwritten digit recognition.","PeriodicalId":339890,"journal":{"name":"AL Rafdain Engineering Journal","volume":"167 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AL Rafdain Engineering Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33899/rengj.2015.101082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Artificial neural networks (ANNs) are widely used in many application areas today. Their practical implementation faces several challenges, among them: the number of samples required to train the network; the number of adders, multipliers, nonlinear transfer functions, and storage elements; and the speed of calculation in both the training and recall phases. In this paper, the RAM-based neural network is investigated. No weights, adders, multipliers, or transfer functions are required to implement it in either hardware or software, but at the cost of large RAM utilization. In addition, only a small number of samples is required for training. However, a hardware implementation needs a large amount of memory to train the network. The network is implemented on an FPGA platform using the Stratix IV GX FPGA development board, which provides a large amount of on-board RAM. A considerable speedup of 237 is achieved in both the training and recall phases. A comparable error rate of 7.6 is achieved when the MNIST (Mixed National Institute of Standards and Technology) database is used to train the network for handwritten digit recognition.
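The abstract describes a RAM-based (weightless) network in which binary input tuples directly address memory locations, so training and recall involve no arithmetic at all. The sketch below illustrates this idea in a WiSARD-like style, assuming binarized 28x28 digit images; the class and parameter names (`Wisard`, `RamDiscriminator`, `tuple_size`) are illustrative and not taken from the paper.

```python
# Minimal sketch of a RAM-based (weightless) classifier, assuming binarized
# images flattened to bit vectors. Not the paper's implementation.
import random

class RamDiscriminator:
    """One discriminator per class: a bank of 1-bit RAM nodes addressed by bit tuples."""
    def __init__(self, input_bits, tuple_size, mapping):
        self.tuple_size = tuple_size
        self.mapping = mapping                      # shared random permutation of bit positions
        self.num_rams = input_bits // tuple_size
        self.rams = [set() for _ in range(self.num_rams)]   # a set models a sparse 1-bit RAM

    def _addresses(self, bits):
        # Group the permuted input bits into tuples; each tuple is one RAM address.
        for r in range(self.num_rams):
            chunk = self.mapping[r * self.tuple_size:(r + 1) * self.tuple_size]
            yield r, tuple(bits[i] for i in chunk)

    def train(self, bits):
        for r, addr in self._addresses(bits):
            self.rams[r].add(addr)                  # write a 1 at the addressed location

    def response(self, bits):
        # Recall: count how many RAM nodes return 1 for this input.
        return sum(addr in self.rams[r] for r, addr in self._addresses(bits))

class Wisard:
    def __init__(self, input_bits, tuple_size, num_classes, seed=0):
        rng = random.Random(seed)
        mapping = list(range(input_bits))
        rng.shuffle(mapping)
        self.discriminators = [RamDiscriminator(input_bits, tuple_size, mapping)
                               for _ in range(num_classes)]

    def train(self, bits, label):
        self.discriminators[label].train(bits)

    def classify(self, bits):
        scores = [d.response(bits) for d in self.discriminators]
        return max(range(len(scores)), key=scores.__getitem__)

# Usage with 28x28 binarized digits (e.g. thresholded MNIST images):
if __name__ == "__main__":
    net = Wisard(input_bits=28 * 28, tuple_size=8, num_classes=10)
    example = [random.randint(0, 1) for _ in range(28 * 28)]   # stand-in for a real image
    net.train(example, label=3)
    print(net.classify(example))                               # -> 3 for the trained pattern
```

The memory trade-off noted in the abstract is visible here: a dense hardware version of each discriminator would need num_rams x 2^tuple_size one-bit locations, so the design replaces multipliers and adders with RAM capacity.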