{"title":"Extreme learning machine with multiple kernels","authors":"Li-juan Su, Min Yao","doi":"10.1109/ICCA.2013.6565148","DOIUrl":null,"url":null,"abstract":"Recently a novel learning algorithm called extreme learning machine (ELM) was proposed for efficiently training single-hidden layer feedforward neural networks (SLFNs). Compared with other traditional gradient-descent-based learning algorithms, ELM has shown promising results because it chooses weights and biases of hidden nodes randomly and obtains the output weights and biases analytically. In most cases, ELM is fast and presents good generalization, but we find that the stability and generalization performance still can be improved. In this paper, we propose a hybrid model which combines the advantage of ELM and the advantage of Bayesian “sum of kernels” model, named Extreme Learning Machine with Multiple Kernels (MK-ELM). This method optimizes the kernel function using a weighted sum of kernel functions by a prior knowledge. Experimental results show that this approach is able to make neural networks more robust and generates better generalization performance for both regression and classification applications.","PeriodicalId":336534,"journal":{"name":"2013 10th IEEE International Conference on Control and Automation (ICCA)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 10th IEEE International Conference on Control and Automation (ICCA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCA.2013.6565148","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). Compared with traditional gradient-descent-based learning algorithms, ELM has shown promising results because it assigns the weights and biases of the hidden nodes randomly and determines the output weights analytically. In most cases, ELM is fast and generalizes well, but its stability and generalization performance can still be improved. In this paper, we propose a hybrid model, named Extreme Learning Machine with Multiple Kernels (MK-ELM), which combines the advantages of ELM with those of the Bayesian "sum of kernels" model. This method constructs the kernel function as a weighted sum of base kernel functions, with the weights guided by prior knowledge. Experimental results show that this approach makes neural networks more robust and yields better generalization performance in both regression and classification applications.
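To make the "weighted sum of kernels" idea concrete, the following is a minimal Python sketch of a kernel-form ELM whose kernel is a convex combination of several RBF kernels. It is illustrative only, not the authors' exact MK-ELM formulation: the class name `MultiKernelELM`, the RBF bandwidths `gammas`, the regularization constant `C`, and the fixed kernel weights `mu` (standing in for the prior knowledge mentioned in the abstract) are all assumptions introduced here for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-sample sets A and B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

class MultiKernelELM:
    """Kernel-form ELM using a fixed convex combination of base kernels.

    Sketch under stated assumptions: kernel weights `mu` are supplied as
    prior knowledge rather than learned from data.
    """
    def __init__(self, gammas=(0.1, 1.0, 10.0), mu=None, C=1.0):
        self.gammas = gammas
        self.mu = (np.full(len(gammas), 1.0 / len(gammas))
                   if mu is None else np.asarray(mu, dtype=float))
        self.C = C  # regularization constant

    def _kernel(self, A, B):
        # Combined kernel: K(a, b) = sum_k mu_k * K_k(a, b)
        return sum(m * rbf_kernel(A, B, g)
                   for m, g in zip(self.mu, self.gammas))

    def fit(self, X, T):
        self.X_ = np.asarray(X, dtype=float)
        T = np.asarray(T, dtype=float)
        Omega = self._kernel(self.X_, self.X_)
        n = Omega.shape[0]
        # Output weights from the regularized least-squares system
        # (I/C + Omega) beta = T, solved analytically as in kernel ELM.
        self.beta_ = np.linalg.solve(np.eye(n) / self.C + Omega, T)
        return self

    def predict(self, X):
        return self._kernel(np.asarray(X, dtype=float), self.X_) @ self.beta_
```

For example, `MultiKernelELM(mu=(0.7, 0.2, 0.1)).fit(X_train, y_train).predict(X_test)` would weight the narrow-bandwidth kernel most heavily; in the paper's setting such weights would come from prior knowledge rather than being chosen arbitrarily as here.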