Title: Comparative analysis of ELM and No-Prop algorithms
Authors: Abobakr Khalil Alshamiri, Alok Singh, R. Bapi
Published in: 2016 Ninth International Conference on Contemporary Computing (IC3), 2016-08-01
DOI: 10.1109/IC3.2016.7880217
Citation count: 2
Abstract
Extreme learning machine (ELM) is a learning method for training feedforward neural networks with one or more randomized hidden layers. It initializes the weights of the hidden neurons randomly and determines the output weights analytically using the Moore-Penrose (MP) generalized inverse. The No-Prop algorithm is a recently proposed training method for feedforward neural networks in which the weights of the hidden neurons are randomly assigned and then fixed, while the output weights are trained with the least mean squares (LMS) algorithm. The difference between ELM and No-Prop lies in how the output weights are trained: ELM optimizes them in batch mode via the MP generalized inverse, whereas No-Prop trains them iteratively with the LMS gradient algorithm. This paper provides a comparative analysis, based on empirical studies, of the stability and convergence performance of the ELM and No-Prop algorithms for data classification.
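The contrast the abstract draws can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the toy dataset, hidden-layer size, learning rate, and epoch count are all assumptions chosen for demonstration. Both methods share the same fixed random hidden layer; only the output-weight training step differs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy binary classification data (for illustration only)
n, d, hidden = 200, 2, 50
X = rng.normal(size=(n, d))
T = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0).reshape(-1, 1)  # targets in {-1, +1}

# Random hidden layer: weights assigned once and never trained (common to both methods)
W = rng.normal(size=(d, hidden))
b = rng.normal(size=(1, hidden))
H = np.tanh(X @ W + b)  # hidden-layer activations

# ELM: output weights solved in one batch step via the Moore-Penrose pseudoinverse
beta_elm = np.linalg.pinv(H) @ T

# No-Prop: output weights trained iteratively with the LMS rule
beta_lms = np.zeros((hidden, 1))
lr = 0.01  # assumed learning rate
for epoch in range(50):
    for i in range(n):
        h = H[i:i + 1]                   # 1 x hidden row of activations
        err = T[i:i + 1] - h @ beta_lms  # instantaneous output error
        beta_lms += lr * h.T @ err       # LMS gradient step

def accuracy(beta):
    return float(np.mean(np.sign(H @ beta) == T))

print(f"ELM accuracy:     {accuracy(beta_elm):.2f}")
print(f"No-Prop accuracy: {accuracy(beta_lms):.2f}")
```

The batch pseudoinverse solve gives ELM its one-shot least-squares solution, while No-Prop's per-sample LMS updates approach a similar solution gradually, which is exactly the stability/convergence trade-off the paper studies empirically.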