Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

Swagatika Devi, A. Jagadev, S. Patnaik
{"title":"利用动态粒子群优化-反向传播学习人工神经网络:经验评价与比较","authors":"Swagatika Devi, A. Jagadev, S. Patnaik","doi":"10.6109/jicce.2015.13.2.123","DOIUrl":null,"url":null,"abstract":"Training neural networks is a complex task with great importance in the field of supervised learning. In the training process, a set of input?output patterns is repeated to an artificial neural network (ANN). From those patterns weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. In contrast, the gradient descent method can achieve faster convergence speed around the global optimum, and at the same time, the convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, also referred to as the DPSO-BP algorithm, to train the weights of an ANN. In this paper, we intend to show the superiority (time performance and quality of solution) of the proposed hybrid algorithm (DPSO-BP) over other more standard algorithms in neural network training. The algorithms are compared using two different datasets, and the results are simulated.","PeriodicalId":272551,"journal":{"name":"J. Inform. and Commun. Convergence Engineering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison\",\"authors\":\"Swagatika Devi, A. Jagadev, S. Patnaik\",\"doi\":\"10.6109/jicce.2015.13.2.123\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Training neural networks is a complex task with great importance in the field of supervised learning. In the training process, a set of input?output patterns is repeated to an artificial neural network (ANN). From those patterns weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. In contrast, the gradient descent method can achieve faster convergence speed around the global optimum, and at the same time, the convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, also referred to as the DPSO-BP algorithm, to train the weights of an ANN. In this paper, we intend to show the superiority (time performance and quality of solution) of the proposed hybrid algorithm (DPSO-BP) over other more standard algorithms in neural network training. The algorithms are compared using two different datasets, and the results are simulated.\",\"PeriodicalId\":272551,\"journal\":{\"name\":\"J. Inform. and Commun. 
Convergence Engineering\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"J. Inform. and Commun. Convergence Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.6109/jicce.2015.13.2.123\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Inform. and Commun. Convergence Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.6109/jicce.2015.13.2.123","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Training neural networks is a complex task of great importance in the field of supervised learning. During training, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and the weights of the interconnections between neurons are adjusted until each input yields the desired output. In this paper, a new hybrid algorithm is proposed for the global optimization of connection weights in an ANN. Dynamic swarms converge rapidly during the initial stages of a global search, but the search becomes very slow near the global optimum. In contrast, the gradient descent method converges quickly near the global optimum and can attain relatively high accuracy there. The proposed hybrid algorithm therefore combines dynamic particle swarm optimization (DPSO) with backpropagation (BP), and is referred to as the DPSO-BP algorithm, to train the weights of an ANN. We show that DPSO-BP is superior to other, more standard training algorithms in neural network training, in both time performance and solution quality. The algorithms are compared on two different datasets through simulation.
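
Since the abstract only outlines the two-stage scheme, the sketch below illustrates the general idea: a particle swarm first explores the flattened weight vector globally, and the best particle found then seeds a gradient descent (backpropagation) refinement. This is an illustration, not the paper's implementation: it uses a standard static PSO update rather than the dynamic variant, a single-hidden-layer network on the XOR problem, and illustrative hyperparameters (swarm size, inertia, learning rate) chosen for the sketch.

```python
# Minimal PSO-then-backpropagation sketch (not the paper's DPSO-BP):
# static PSO, one hidden layer, XOR data, illustrative hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 4, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total number of weights

def unpack(w):
    """Split a flat weight vector into layer matrices and bias vectors."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)           # hidden activations
    return h, sigmoid(h @ W2 + b2)     # hidden, output

def mse(w):
    return float(np.mean((forward(w)[1] - y) ** 2))

# --- Stage 1: PSO over the flat weight vector (fast global search) ---
SWARM, INERTIA, C1, C2 = 30, 0.7, 1.5, 1.5   # illustrative values
pos = rng.uniform(-1.0, 1.0, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(200):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

# --- Stage 2: backpropagation refinement starting from the swarm's best ---
w, LR = gbest.copy(), 0.5
for _ in range(2000):
    _, _, W2, _ = unpack(w)
    h, out = forward(w)
    # Gradient of the MSE loss through the sigmoid layers.
    d_out = (out - y) * out * (1 - out) * (2.0 / len(X))
    d_h = (d_out @ W2.T) * h * (1 - h)
    grad = np.concatenate([(X.T @ d_h).ravel(), d_h.sum(0),
                           (h.T @ d_out).ravel(), d_out.sum(0)])
    w = w - LR * grad

print(f"error after PSO: {mse(gbest):.4f}, after BP refinement: {mse(w):.4f}")
```

The main simplification is the fixed swarm parameters: in the dynamic PSO the abstract refers to, quantities such as inertia are adapted over the run. The division of labor, however, matches the abstract's argument: the swarm supplies a good starting point quickly, and gradient descent finishes the convergence near the optimum.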