Online least-squares training for the underdetermined case

R. Schultz, M. Hagan
{"title":"待定情况下的在线最小二乘训练","authors":"R. Schultz, M. Hagan","doi":"10.1109/IJCNN.1999.832665","DOIUrl":null,"url":null,"abstract":"We describe an online method of training neural networks, which is based on solving the linearized least-squares problem using the pseudo-inverse for the underdetermined case. This underdetermined linearized least squares (ULLS) method requires significantly less computation and memory for implementation than standard higher-order methods such as the Gauss-Newton method or extended Kalman filter. This decrease is possible because the method allows training to proceed with a smaller number of samples than parameters. Simulation results which compare the performance of the ULLS algorithm to the recursive linearized least squares algorithm (RLLS) and the gradient descent algorithm are presented. Results showing the impact on computational complexity and squared-error performance of the ULLS method, when the number of terms in the Jacobian matrix is varied, are also presented.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Online least-squares training for the underdetermined case\",\"authors\":\"R. Schultz, M. Hagan\",\"doi\":\"10.1109/IJCNN.1999.832665\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We describe an online method of training neural networks, which is based on solving the linearized least-squares problem using the pseudo-inverse for the underdetermined case. This underdetermined linearized least squares (ULLS) method requires significantly less computation and memory for implementation than standard higher-order methods such as the Gauss-Newton method or extended Kalman filter. This decrease is possible because the method allows training to proceed with a smaller number of samples than parameters. Simulation results which compare the performance of the ULLS algorithm to the recursive linearized least squares algorithm (RLLS) and the gradient descent algorithm are presented. Results showing the impact on computational complexity and squared-error performance of the ULLS method, when the number of terms in the Jacobian matrix is varied, are also presented.\",\"PeriodicalId\":157719,\"journal\":{\"name\":\"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1999.832665\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. 
No.99CH36339)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1999.832665","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

We describe an online method of training neural networks, which is based on solving the linearized least-squares problem using the pseudo-inverse for the underdetermined case. This underdetermined linearized least squares (ULLS) method requires significantly less computation and memory for implementation than standard higher-order methods such as the Gauss-Newton method or extended Kalman filter. This decrease is possible because the method allows training to proceed with a smaller number of samples than parameters. Simulation results which compare the performance of the ULLS algorithm to the recursive linearized least squares algorithm (RLLS) and the gradient descent algorithm are presented. Results showing the impact on computational complexity and squared-error performance of the ULLS method, when the number of terms in the Jacobian matrix is varied, are also presented.
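To make the central idea concrete, the following is a minimal numerical sketch of the underdetermined linearized least-squares solve the abstract refers to, not the authors' exact ULLS recursion (whose details are not given here). The function name `ulls_style_update` and the toy dimensions are illustrative assumptions: with m samples and n parameters, m < n, the linearized system J dw = e has infinitely many solutions, and the pseudo-inverse picks the minimum-norm one, dw = J^T (J J^T)^{-1} e, at the cost of an m-by-m solve rather than an n-by-n one.

```python
import numpy as np

def ulls_style_update(J, e):
    """Minimum-norm solution of J dw = e for an underdetermined Jacobian (m < n).

    Illustrative sketch only: uses the identity J^+ = J^T (J J^T)^{-1},
    valid when J has full row rank.
    """
    m, n = J.shape
    assert m < n, "underdetermined case: fewer samples than parameters"
    # Solve the small m-by-m system instead of forming n-by-n normal equations.
    return J.T @ np.linalg.solve(J @ J.T, e)

# Toy example: 3 samples, 10 parameters (hypothetical sizes).
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 10))   # Jacobian of network outputs w.r.t. weights
e = rng.standard_normal(3)         # current output errors
dw = ulls_style_update(J, e)
print(np.allclose(J @ dw, e))      # the minimum-norm step drives the linearized error to zero
```

This illustrates why fewer samples than parameters can reduce computation and memory: the matrix that must be factored is only m-by-m, which is the source of the savings over Gauss-Newton or extended-Kalman-filter updates that work with all n parameters at once.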