A Fast Learning Strategy for Multilayer Feedforward Neural Networks

Huawei Chen, Hualan Zhong, H. Yuan, F. Jin
2006 6th World Congress on Intelligent Control and Automation, 2006-10-23
DOI: 10.1109/WCICA.2006.1712920
Citations: 1

Abstract

This paper proposes a new training algorithm for feedforward neural networks called bi-phase weight adjusting (BPWA). Unlike the BP learning algorithm, BPWA adjusts weights during both the forward phase and the backward phase: the forward pass computes the minimum-norm least-squares solution for the weights between the hidden layer and the output layer, while the backward pass adjusts the remaining weights in the network by error gradient descent. Experimental results on function approximation and classification tasks show that the new algorithm converges faster than the BP and Levenberg-Marquardt BP algorithms while maintaining good generalization performance.
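The two-phase scheme described in the abstract can be sketched as follows. This is a minimal single-hidden-layer reconstruction, not the authors' implementation: the sigmoid activation, the use of the pseudoinverse to obtain the minimum-norm least-squares solution, the absence of bias terms, and the learning rate are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpwa_step(X, T, W_in, lr=0.05):
    """One BPWA-style training step (hypothetical sketch).

    Forward phase: compute hidden activations H and set the
    hidden-to-output weights W_out to the minimum-norm
    least-squares solution of H @ W_out = T (via pseudoinverse).
    Backward phase: adjust the input-to-hidden weights W_in by
    gradient descent on the squared output error.
    """
    H = sigmoid(X @ W_in)               # hidden-layer outputs, shape (n, h)
    W_out = np.linalg.pinv(H) @ T       # minimum-norm least-squares solution
    E = H @ W_out - T                   # output error
    # Gradient of 0.5*||E||^2 w.r.t. W_in (sigmoid derivative is H*(1-H))
    dH = (E @ W_out.T) * H * (1.0 - H)
    W_in = W_in - lr * (X.T @ dH)
    return W_in, W_out, 0.5 * float(np.sum(E ** 2))

# Illustrative use on a small function-approximation task:
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))
T = np.sin(np.pi * X)
W = rng.normal(0.0, 1.0, (1, 10))
for _ in range(20):
    W, W_out, err = bpwa_step(X, T, W)
```

Because the output weights are re-solved exactly in every forward pass, only the input-side weights are learned by gradient descent, which is where the claimed speedup over plain BP would come from.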