Pattern Association For Character Recognition By Back-Propagation Algorithm Using Neural Network Approach

S. Kosbatwar
{"title":"Pattern Association For Character Recognition By Back-Propagation Algorithm Using Neural Network Approach","authors":"S. Kosbatwar","doi":"10.5121/IJCSES.2012.3112","DOIUrl":null,"url":null,"abstract":"The use of artificial neural network in applications can dramatically simplify the code and improve quality of recognition while achieving good performance. Another benefit of using neural network in application is extensibility of the system – ability to recognize more character sets than initially defined. Most of traditional systems are not extensible enough. In this paper recognition of characters is possible by using neural network back propagation algorithm. What is neural network Neural network are simplified models of the biological nervous system and therefore have drawn their motivation from the kind of computing performed by a human brain. An NN in general is a highly interconnected of a large number of processing elements called neurons in an architecture inspired by the brain. An NN can be massively parallel and therefore is said to exhibit parallel distributed processing. Neural Network exhibits characteristics such as mapping capabilities or pattern association, generalization, robustness, fault tolerance, and parallel and high speed information processing. Neural network learn by example. They can therefore be trained with known examples of a problem to acquire knowledge about it. Once appropriate trained the network can be put to effective use in solving ‘unknown’ or ‘untrained’ instances of the problem. Neural network adopt various learning mechanism of which supervised learning and unsupervised learning methods have turned out to be very popular. In supervised learning, a teacher is assumed to be present during the learning process, i.e. the network aims to minimize he error between target (desired) output presented by the teacher and the computed output to achieve better performance. However, in unsupervised learning, there is no teacher present to hand over the desired output and the network therefore tries to learn by itself, organizing the input instances of the problem.NN Architecture has been broadly classified as single layer feed forward networks, multilayer feed forward networks and recurrent networks, over the year several other NN.Architecture have evolved .some of the well known NN system include backpropogation network, perceptron, ADALINE ,Boltzmann machine ,adaptive resonance theory, Self-organized feature map, and Hopfield network. Neural Network has been successfully applied to problem in the field of pattern recognition, image processing, data compression, forecasting and optimization to quote a few. International Journal of Computer Science & Engineering Survey (IJCSES) Vol.3, No.1, February 2012 128 Backpropagation algorithm The architecture of the neural network is the one of a basically backpropagation network with only one hidden layer (although it is the same techniques with more layers). The input layer is constituted of 35 neuron (one per input pixel in the matrix, of course)., they are 8 hidden neurons, and 26 output neurons(one per letter) in this problem domain of character recognition. The weight matrix gives the weight factor for each input of each neuron. These matrices are what we can call the memory of the neural network. 
The learning process is done by adjusting these weight so that for each given input the output is as near as possible of a wanted output (Here the full activation of the output neuron corresponding to the character to be recognized) [1]. The training patterns are applied in some random order one by one, and the weights are adjusted using the backpropagation learning law. Each application of the training set patterns is called a cycle. The patterns have to be applied for several training cycles to obtain the output error to an acceptable low value. Once the network is trained, it can be used to recall the appropriate pattern for a new input pattern. The computation for recall is straightforward, in the sense that the weights and the output functions of the units in different layers are used to compute the activation values and the output signals. The signals from the output layer correspond to the output[2]. Backpropagation learning emerged as the most significant result in the field of artificial neural networks. The backpropagation learning involves propagation of the error backwards from the output layer to the hidden layers in order to determine the update for the weights leading to the units in a hidden layer. The error at the output layer itself is computed using the difference between the desired output and the actual output at each of the output units. The actual output for a given input training pattern is determined by computing the outputs of units for each hidden layer in the forward pass of the input data. The error in the output is propagated backwards only to determine the weight updates [6].","PeriodicalId":415526,"journal":{"name":"International Journal of Computer Science & Engineering Survey","volume":"135 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Science & Engineering Survey","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5121/IJCSES.2012.3112","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 31

Abstract

The use of artificial neural networks in applications can dramatically simplify the code and improve the quality of recognition while achieving good performance. Another benefit of using neural networks is the extensibility of the system: the ability to recognize more character sets than initially defined. Most traditional systems are not extensible enough. In this paper, characters are recognized using a neural network trained with the backpropagation algorithm.

What is a neural network?

Neural networks are simplified models of the biological nervous system and therefore draw their motivation from the kind of computing performed by the human brain. A neural network (NN) is, in general, a highly interconnected network of a large number of processing elements called neurons, arranged in an architecture inspired by the brain. An NN can be massively parallel and is therefore said to exhibit parallel distributed processing. Neural networks exhibit characteristics such as mapping capability or pattern association, generalization, robustness, fault tolerance, and parallel, high-speed information processing. Neural networks learn by example: they can be trained with known examples of a problem to acquire knowledge about it. Once appropriately trained, the network can be put to effective use in solving 'unknown' or 'untrained' instances of the problem. Neural networks adopt various learning mechanisms, of which supervised and unsupervised learning have turned out to be very popular. In supervised learning, a teacher is assumed to be present during the learning process; the network aims to minimize the error between the target (desired) output presented by the teacher and the computed output. In unsupervised learning, by contrast, there is no teacher to hand over the desired output, so the network tries to learn by itself, organizing the input instances of the problem. NN architectures are broadly classified as single-layer feedforward networks, multilayer feedforward networks, and recurrent networks; over the years several other architectures have evolved. Some of the well-known NN systems include the backpropagation network, perceptron, ADALINE, Boltzmann machine, adaptive resonance theory, self-organizing feature map, and Hopfield network. Neural networks have been successfully applied to problems in pattern recognition, image processing, data compression, forecasting, and optimization, to name a few.

Backpropagation algorithm

The architecture of the neural network is basically that of a backpropagation network with a single hidden layer (although the same technique applies with more layers). In this character-recognition problem, the input layer consists of 35 neurons (one per input pixel in the matrix), there are 8 hidden neurons, and there are 26 output neurons (one per letter). The weight matrices give the weight factor for each input of each neuron; these matrices are what we can call the memory of the neural network. Learning is done by adjusting these weights so that, for each given input, the output is as close as possible to the desired output (here, the full activation of the output neuron corresponding to the character to be recognized) [1]. The training patterns are applied one by one in some random order, and the weights are adjusted using the backpropagation learning law.
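As a concrete illustration of the network just described, the following is a minimal Python/NumPy sketch, not taken from the paper. The 35-8-26 layer sizes come from the text; the sigmoid activation, the weight initialization, and all variable names are assumptions made for the example.

```python
import numpy as np

# Layer sizes from the text: 35 input pixels, 8 hidden units,
# 26 output units (one per letter). Everything else is assumed.
N_INPUT, N_HIDDEN, N_OUTPUT = 35, 8, 26

rng = np.random.default_rng(0)

# The weight matrices -- "the memory of the neural network" -- plus bias vectors.
W1 = rng.uniform(-0.5, 0.5, size=(N_HIDDEN, N_INPUT))
b1 = np.zeros(N_HIDDEN)
W2 = rng.uniform(-0.5, 0.5, size=(N_OUTPUT, N_HIDDEN))
b2 = np.zeros(N_OUTPUT)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Forward pass: a 35-element pixel vector -> 26 output activations."""
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = sigmoid(W2 @ h + b2)   # output-layer activations
    return h, y
```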
Each application of the training set patterns is called a cycle. The patterns have to be applied for several training cycles to bring the output error down to an acceptably low value. Once the network is trained, it can be used to recall the appropriate pattern for a new input pattern. The computation for recall is straightforward: the weights and the output functions of the units in the different layers are used to compute the activation values and the output signals, and the signals from the output layer constitute the output [2]. Backpropagation learning emerged as the most significant result in the field of artificial neural networks. It involves propagating the error backwards from the output layer to the hidden layers in order to determine the updates for the weights leading to the units in a hidden layer. The error at the output layer itself is computed from the difference between the desired output and the actual output at each of the output units. The actual output for a given input training pattern is determined by computing the outputs of the units in each hidden layer during the forward pass of the input data; the error in the output is propagated backwards only to determine the weight updates [6].
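Continuing the sketch above (still an illustration under the same assumptions, not the paper's implementation), a backpropagation update for one pattern, a training loop over cycles, and recall of the most strongly activated letter could look like this:

```python
def backprop_update(x, target, lr=0.5):
    """One backpropagation step for a single training pattern:
    compute the output error, propagate it back through the hidden
    layer, and update the weights. Returns the squared error."""
    global W1, b1, W2, b2
    h, y = forward(x)
    # Error at the output layer: desired output minus actual output,
    # scaled by the sigmoid derivative.
    delta_out = (target - y) * y * (1.0 - y)
    # Error propagated backwards to the hidden layer (used only to
    # determine the weight updates, as described above).
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    W2 += lr * np.outer(delta_out, h)
    b2 += lr * delta_out
    W1 += lr * np.outer(delta_hid, x)
    b1 += lr * delta_hid
    return float(np.sum((target - y) ** 2))

def train(patterns, targets, max_cycles=5000, tol=0.01):
    """Apply the whole training set (one pass = one 'cycle') in random
    order until the total output error is acceptably low."""
    for cycle in range(max_cycles):
        total_error = sum(backprop_update(patterns[i], targets[i])
                          for i in rng.permutation(len(patterns)))
        if total_error < tol:
            break

def recall(x):
    """Recall: run the forward pass and report the letter whose
    output neuron is most strongly activated."""
    _, y = forward(x)
    return chr(ord('A') + int(np.argmax(y)))
```

For example, calling recall() on a new 35-pixel pattern after training returns the letter corresponding to the output unit with the highest activation.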