A more powerful random neural network model in supervised learning applications

Sebastián Basterrech, G. Rubino
{"title":"A more powerful random neural network model in supervised learning applications","authors":"Sebastián Basterrech, G. Rubino","doi":"10.1109/SOCPAR.2013.7054127","DOIUrl":null,"url":null,"abstract":"Since the early 1990s, Random Neural Networks (RNNs) have gained importance in the Neural Networks and Queueing Networks communities. RNNs are inspired by biological neural networks and they are also an extension of open Jackson's networks in Queueing Theory. In 1993, a learning algorithm of gradient type was introduced in order to use RNNs in supervised learning tasks. This method considers only the weight connections among the neurons as adjustable parameters. All other parameters are deemed fixed during the training process. The RNN model has been successfully utilized in several types of applications such as: supervised learning problems, pattern recognition, optimization, image processing, associative memory. In this contribution we present a modification of the classic model obtained by extending the set of adjustable parameters. The modification increases the potential of the RNN model in supervised learning tasks keeping the same network topology and the same time complexity of the algorithm. We describe the new equations implementing a gradient descent learning technique for the model.","PeriodicalId":315126,"journal":{"name":"2013 International Conference on Soft Computing and Pattern Recognition (SoCPaR)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 International Conference on Soft Computing and Pattern Recognition (SoCPaR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SOCPAR.2013.7054127","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Since the early 1990s, Random Neural Networks (RNNs) have gained importance in both the Neural Networks and the Queueing Networks communities. RNNs are inspired by biological neural networks and can also be seen as an extension of open Jackson networks in Queueing Theory. In 1993, a gradient-type learning algorithm was introduced to make RNNs usable in supervised learning tasks. That method treats only the weights of the connections among the neurons as adjustable parameters; all other parameters remain fixed during training. The RNN model has been applied successfully to several types of problems, such as supervised learning, pattern recognition, optimization, image processing, and associative memory. In this contribution we present a modification of the classic model, obtained by extending the set of adjustable parameters. The modification increases the power of the RNN model in supervised learning tasks while keeping the same network topology and the same time complexity of the learning algorithm. We describe the new equations implementing a gradient descent learning technique for the modified model.
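To make the setting concrete, below is a minimal numerical sketch of the classic model on which the paper builds: Gelenbe's RNN steady-state equations solved by fixed-point iteration, plus one gradient-descent step on the excitatory weights. The function names (rnn_steady_state, train_step) and the finite-difference gradient are illustrative assumptions only; the original 1993 algorithm uses analytic gradient equations, and the paper's contribution is to enlarge the set of adjustable parameters beyond the weights, which this sketch does not reproduce.

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, n_iter=200):
    """Steady-state excitation probabilities q_i of Gelenbe's RNN,
    computed by fixed-point iteration of
        q_i = lambda_i^+ / (r_i + lambda_i^-),
    where lambda_i^+ = sum_j q_j * W_plus[j, i] + Lambda[i],
          lambda_i^- = sum_j q_j * W_minus[j, i] + lam[i],
    and r_i = sum_j (W_plus[i, j] + W_minus[i, j]) is the firing rate."""
    r = (W_plus + W_minus).sum(axis=1)        # firing rate of each neuron
    q = np.zeros_like(Lambda, dtype=float)
    for _ in range(n_iter):
        lam_plus = q @ W_plus + Lambda        # total excitatory arrival rates
        lam_minus = q @ W_minus + lam         # total inhibitory arrival rates
        # Clamping at 1 is a practical stability safeguard for unstable networks.
        q = np.minimum(lam_plus / (r + lam_minus), 1.0)
    return q

def train_step(W_plus, W_minus, Lambda, lam, target, lr=0.01, eps=1e-5):
    """One gradient-descent step on the excitatory weights, minimizing the
    quadratic error E = 0.5 * sum_i (q_i - target_i)^2.  Finite differences
    stand in here for the analytic gradient equations of the original method."""
    def loss(Wp):
        q = rnn_steady_state(Wp, W_minus, Lambda, lam)
        return 0.5 * np.sum((q - target) ** 2)

    base = loss(W_plus)
    grad = np.zeros_like(W_plus)
    for idx in np.ndindex(W_plus.shape):
        perturbed = W_plus.copy()
        perturbed[idx] += eps
        grad[idx] = (loss(perturbed) - base) / eps
    # Spike rates are non-negative, so project the update back onto [0, inf).
    return np.maximum(W_plus - lr * grad, 0.0)

# Usage: a hypothetical 3-neuron network with random non-negative rates.
rng = np.random.default_rng(0)
W_plus, W_minus = rng.uniform(0, 1, (3, 3)), rng.uniform(0, 1, (3, 3))
Lambda, lam = rng.uniform(0, 1, 3), rng.uniform(0, 1, 3)
q = rnn_steady_state(W_plus, W_minus, Lambda, lam)
W_plus = train_step(W_plus, W_minus, Lambda, lam, target=np.array([0.2, 0.5, 0.8]))
```

Note that only W_plus and W_minus are updated in this classic formulation; the exogenous rates Lambda and lam and the firing rates stay fixed, which is exactly the restriction the paper's extended parameter set is meant to lift.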