{"title":"A brief discussion on moderatism based local gradient learning rules","authors":"M. T. Islam, Y. Okabe","doi":"10.1109/ISSPA.2003.1224858","DOIUrl":null,"url":null,"abstract":"Moderatism [Y. Okabe et al., 1988], which is a learning rule for ANNs, is based on the principle that individual neurons and neural nets as a whole try to sustain a \"moderate\" level in their input and output signals. In this way, a close mutual relationship with the outside environment is maintained. In this paper, two potential moderatism-based local, gradient learning rules are proposed. Then, a pattern learning experiment is performed to compare the learning performances of these two learning rules, the error based weight update (EBWU) rule [Tanvir Islam, M et al., December 2001][Tanvir Islam, M et al., September 2001], and error backpropagation [Bishop, CM et al., 1995].","PeriodicalId":264814,"journal":{"name":"Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings.","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSPA.2003.1224858","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Moderatism [Okabe et al., 1988], a learning rule for artificial neural networks (ANNs), is based on the principle that individual neurons, and the neural net as a whole, try to sustain a "moderate" level in their input and output signals. In this way, a close mutual relationship with the outside environment is maintained. In this paper, two potential moderatism-based local gradient learning rules are proposed. A pattern learning experiment is then performed to compare the learning performance of these two rules, the error-based weight update (EBWU) rule [Tanvir Islam et al., December 2001; Tanvir Islam et al., September 2001], and error backpropagation [Bishop et al., 1995].
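The abstract does not state the proposed rules' equations, so the following is only a minimal sketch, under assumed definitions, of what a moderatism-based local gradient rule could look like: each neuron is given a cost that penalizes deviation of its net input and output from a target "moderate" level m, and each weight is updated by gradient descent on that cost using only quantities local to its own neuron. The cost C_j, the sigmoid activation, and all parameter values here are illustrative assumptions, not the EBWU rule or the rules proposed in the paper.

```python
import numpy as np

# Illustrative sketch only (not taken from the paper): a single layer of neurons
# whose assumed "moderation" cost penalizes deviation of both the net input u_j
# and the output y_j from a target moderate level m.

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def moderatism_step(W, x, m=0.5, eta=0.1):
    """One local gradient step on an assumed per-neuron moderation cost.

    Assumed cost for neuron j (for illustration only):
        C_j = 0.5 * (y_j - m)**2 + 0.5 * (u_j - m)**2,
    with u_j = sum_i W[j, i] * x[i] and y_j = sigmoid(u_j).
    Then dC_j/dW[j, i] = ((y_j - m) * y_j * (1 - y_j) + (u_j - m)) * x[i],
    which involves only quantities available at neuron j itself.
    """
    u = W @ x                                    # net inputs, one per neuron
    y = sigmoid(u)                               # neuron outputs
    delta = (y - m) * y * (1.0 - y) + (u - m)    # per-neuron local error signal
    W -= eta * np.outer(delta, x)                # gradient descent on the moderation cost
    return W, y

# Toy usage: drive 3 neurons toward the assumed moderate activity level on random inputs.
W = rng.normal(scale=0.5, size=(3, 5))
for _ in range(200):
    x = rng.uniform(size=5)
    W, y = moderatism_step(W, x)
print("final outputs:", np.round(y, 3))
```

The point of the sketch is the locality: each weight update uses only the incoming signal and its own neuron's net input and output, in contrast to error backpropagation, which requires error signals propagated back from downstream layers.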