{"title":"具有Mittag-Leffler分布的分数阶极限学习机","authors":"Haoyu Niu, Yuquan Chen, Yangquan Chen","doi":"10.1115/detc2019-97652","DOIUrl":null,"url":null,"abstract":"\n Extreme Learning Machine (ELM) has a powerful capability to approximate the regression and classification problems for a lot of data. ELM does not need to learn parameters in hidden neurons, which enables ELM to learn a thousand times faster than conventional popular learning algorithms. Since the parameters in the hidden layers are randomly generated, what is the optimal randomness? Lévy distribution, a heavy-tailed distribution, has been shown to be the optimal randomness in an unknown environment for finding some targets. Thus, Lévy distribution is used to generate the parameters in the hidden layers (more likely to reach the optimal parameters) and better computational results are then derived. Since Lévy distribution is a special case of Mittag-Leffler distribution, in this paper, the Mittag-Leffler distribution is used in order to get better performance. We show the procedure of generating the Mittag-Leffler distribution and then the training algorithm using Mittag-Leffler distribution is given. The experimental result shows that the Mittag-Leffler distribution performs similarly as the Lévy distribution, both can reach better performance than the conventional method. Some detailed discussions are finally presented to explain the experimental results.","PeriodicalId":166402,"journal":{"name":"Volume 9: 15th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Fractional-Order Extreme Learning Machine With Mittag-Leffler Distribution\",\"authors\":\"Haoyu Niu, Yuquan Chen, Yangquan Chen\",\"doi\":\"10.1115/detc2019-97652\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Extreme Learning Machine (ELM) has a powerful capability to approximate the regression and classification problems for a lot of data. ELM does not need to learn parameters in hidden neurons, which enables ELM to learn a thousand times faster than conventional popular learning algorithms. Since the parameters in the hidden layers are randomly generated, what is the optimal randomness? Lévy distribution, a heavy-tailed distribution, has been shown to be the optimal randomness in an unknown environment for finding some targets. Thus, Lévy distribution is used to generate the parameters in the hidden layers (more likely to reach the optimal parameters) and better computational results are then derived. Since Lévy distribution is a special case of Mittag-Leffler distribution, in this paper, the Mittag-Leffler distribution is used in order to get better performance. We show the procedure of generating the Mittag-Leffler distribution and then the training algorithm using Mittag-Leffler distribution is given. The experimental result shows that the Mittag-Leffler distribution performs similarly as the Lévy distribution, both can reach better performance than the conventional method. 
Some detailed discussions are finally presented to explain the experimental results.\",\"PeriodicalId\":166402,\"journal\":{\"name\":\"Volume 9: 15th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Volume 9: 15th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1115/detc2019-97652\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Volume 9: 15th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/detc2019-97652","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fractional-Order Extreme Learning Machine With Mittag-Leffler Distribution
Extreme Learning Machine (ELM) has a powerful capability to approximate regression and classification problems on large amounts of data. ELM does not need to learn the parameters of its hidden neurons, which enables it to train up to a thousand times faster than conventional learning algorithms. Since the hidden-layer parameters are randomly generated, a natural question is: what is the optimal randomness? The Lévy distribution, a heavy-tailed distribution, has been shown to be the optimal random search strategy for finding targets in an unknown environment. It has therefore been used to generate the hidden-layer parameters, making it more likely to reach near-optimal parameters and yielding better computational results. Since the Lévy distribution is a special case of the Mittag-Leffler distribution, this paper uses the Mittag-Leffler distribution to obtain better performance. We show the procedure for generating Mittag-Leffler distributed random numbers and then give the training algorithm that uses the Mittag-Leffler distribution. The experimental results show that the Mittag-Leffler distribution performs similarly to the Lévy distribution, and both achieve better performance than the conventional method. Some detailed discussions are finally presented to explain the experimental results.
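The abstract describes two steps: drawing the hidden-layer parameters from a Mittag-Leffler distribution and then solving the ELM output weights by least squares. Below is a minimal sketch of how such a pipeline could look; it is not the authors' exact procedure. The sampler uses the Kozubowski-Rachev two-uniform transformation commonly cited for the (type-1) Mittag-Leffler distribution, while the sign randomization of the weights, the tanh activation, and the parameter values (alpha, scale) are illustrative assumptions.

```python
import numpy as np

def mittag_leffler_samples(alpha, scale, size, rng=None):
    """Draw Mittag-Leffler (type-1) distributed samples using the
    two-uniform transformation attributed to Kozubowski and Rachev.
    alpha must lie in (0, 1]; alpha = 1 reduces to the exponential case."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    v = rng.uniform(size=size)
    # The bracketed factor equals sin(a*pi - a*pi*v) / sin(a*pi*v) and is
    # always positive for alpha in (0, 1], so the fractional power is safe.
    factor = (np.sin(alpha * np.pi) / np.tan(alpha * np.pi * v)
              - np.cos(alpha * np.pi)) ** (1.0 / alpha)
    return -scale * np.log(u) * factor

def elm_train(X, T, n_hidden, alpha=0.9, scale=1.0, rng=None):
    """Sketch of ELM training: hidden weights/biases drawn from a
    Mittag-Leffler distribution, output weights solved in closed form."""
    rng = np.random.default_rng() if rng is None else rng
    n_features = X.shape[1]
    # The Mittag-Leffler samples are positive; random signs are attached here
    # so the weights are centred around zero (an illustrative assumption).
    w = mittag_leffler_samples(alpha, scale, (n_features, n_hidden), rng)
    w *= rng.choice([-1.0, 1.0], size=w.shape)
    b = mittag_leffler_samples(alpha, scale, n_hidden, rng)
    H = np.tanh(X @ w + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T    # output weights via Moore-Penrose pseudoinverse
    return w, b, beta

def elm_predict(X, w, b, beta):
    """Forward pass through the trained ELM."""
    return np.tanh(X @ w + b) @ beta
```

Setting alpha = 1 in the sampler recovers an exponential distribution, while smaller alpha produces heavier tails; this tunable heavy-tailed randomness in the hidden-layer parameters is the property the paper exploits.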