A Distributed Learning Algorithm for RBF Neural Networks
Jing Dong, Liu Yang, Xiao-qing Luo
2021 6th International Conference on Computational Intelligence and Applications (ICCIA), June 2021
DOI: 10.1109/ICCIA52886.2021.00059
Citations: 0
Abstract
Training a radial basis function (RBF) neural network on a single processor is usually challenging due to limited computation and storage resources, especially for large-scale, high-dimensional data. In addition, in real applications, large-scale data may be collected in a distributed manner, which also makes it difficult to process on a single processor. To address these issues, we propose a distributed learning algorithm for RBF neural networks, in which the network can be trained in parallel on multiple processors. Specifically, the large-scale training data is divided into groups, and each processor is associated with exactly one group. By introducing a shared output weight vector, training can be carried out simultaneously on the different processors. The resulting optimization problem is solved within the alternating direction method of multipliers (ADMM) framework. Simulation results demonstrate the effectiveness of the proposed algorithm.
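The abstract's scheme — partitioning the data across processors, giving each a local copy of the output weights, and enforcing agreement through a shared output weight vector via ADMM — can be sketched as a consensus least-squares problem over the RBF output layer. The sketch below is an illustration under assumed details (Gaussian basis functions with fixed, evenly spaced centers; the function names `rbf_features` and `distributed_rbf_admm` and all parameter values are our own, not from the paper), with the per-processor updates simulated sequentially:

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian RBF activations: phi[n, j] = exp(-(x_n - c_j)^2 / (2 * width^2))
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def distributed_rbf_admm(X, y, centers, width, n_workers=4, rho=1.0, n_iter=300):
    # Split the training data into groups, one per (simulated) processor.
    Xs = np.array_split(X, n_workers)
    ys = np.array_split(y, n_workers)
    m = len(centers)
    w = np.zeros((n_workers, m))   # local output weight vectors
    u = np.zeros((n_workers, m))   # scaled dual variables
    z = np.zeros(m)                # shared (consensus) output weight vector
    # Precompute each group's normal-equation matrices for the local solves.
    phis = [rbf_features(Xi, centers, width) for Xi in Xs]
    As = [p.T @ p + rho * np.eye(m) for p in phis]
    bs = [p.T @ yi for p, yi in zip(phis, ys)]
    for _ in range(n_iter):
        # Local updates: in a real deployment these run in parallel,
        # one per processor, each touching only its own data group.
        for i in range(n_workers):
            w[i] = np.linalg.solve(As[i], bs[i] + rho * (z - u[i]))
        z = (w + u).mean(axis=0)   # consensus step: average local weights
        u += w - z                 # dual update
    return z
```

For example, fitting samples of sin(x) with a handful of centers recovers a shared weight vector whose predictions closely track the target, even though no single worker ever sees the full dataset.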