A Review of Weight Optimization Techniques in Recurrent Neural Networks

Alawi Alqushaibi, S. J. Abdulkadir, H. Rais, Qasem Al-Tashi
{"title":"A Review of Weight Optimization Techniques in Recurrent Neural Networks","authors":"Alawi Alqushaibi, S. J. Abdulkadir, H. Rais, Qasem Al-Tashi","doi":"10.1109/ICCI51257.2020.9247757","DOIUrl":null,"url":null,"abstract":"Recurrent neural network (RNN) has gained much attention from researchers working in the domain of time series data processing and proved to be an ideal choice for processing such data. As a result, several studies have been conducted on analyzing the time series data and data processing through a variety of RNN techniques. However, every type of RNN has its own flaws. Simple Recurrent Neural Networks (SRNN) are computationally less complex than other types of RNN such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). However, SRNN has some drawbacks such as vanishing gradient problem that makes it difficult to train when dealing with long term dependencies. The vanishing gradient exists during the training process of SRNN due to the multiplication of the gradient with small value when using the most traditional optimization algorithm the Gradient Decent (GD). Therefore, researches intend to overcome such limitations by utilizing weight optimized techniques such as metaheuristic algorithms. The objective of this paper is to present an extensive review of the challenges and issues of RNN weight optimization techniques and critically analyses the existing proposed techniques. The authors believed that the conducted review would serve as a main source of the techniques and methods used to resolve the problem of RNN time series data and data processing. Furthermore, current challenges and issues are deliberated to find promising research domains for further study.","PeriodicalId":194158,"journal":{"name":"2020 International Conference on Computational Intelligence (ICCI)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Computational Intelligence (ICCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCI51257.2020.9247757","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Recurrent neural networks (RNNs) have gained much attention from researchers working in the domain of time series data processing and have proved to be an ideal choice for processing such data. As a result, several studies have analyzed time series data and data processing through a variety of RNN techniques. However, every type of RNN has its own flaws. Simple Recurrent Neural Networks (SRNNs) are computationally less complex than other types of RNN such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). However, SRNNs have drawbacks such as the vanishing gradient problem, which makes them difficult to train on long-term dependencies. The vanishing gradient arises during SRNN training because the gradient is repeatedly multiplied by small values when the most traditional optimization algorithm, Gradient Descent (GD), is used. Therefore, researchers aim to overcome such limitations by utilizing weight optimization techniques such as metaheuristic algorithms. The objective of this paper is to present an extensive review of the challenges and issues of RNN weight optimization techniques and to critically analyze the existing proposed techniques. The authors believe that this review will serve as a main source of the techniques and methods used to resolve the problems of RNN time series processing. Furthermore, current challenges and issues are discussed to identify promising domains for further research.
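The vanishing gradient the abstract describes can be made concrete. Below is a minimal NumPy sketch (not from the paper; the tanh SRNN, weight scale, and sequence length are illustrative assumptions): backpropagation through time multiplies one Jacobian per time step, and because each Jacobian diag(1 - tanh²(a)) · W typically has norm below one, the gradient norm decays exponentially with the length of the time gap.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden = 16  # hidden-state size (illustrative)
T = 50       # sequence length (illustrative)
W = rng.normal(0.0, 0.1, (hidden, hidden))  # small-scale recurrent weights (assumption)

# Forward pass of a simple RNN, h_t = tanh(W h_{t-1} + x_t),
# recording pre-activations for the backward pass.
h = np.zeros(hidden)
pre_acts = []
for t in range(T):
    x = rng.normal(0.0, 1.0, hidden)  # dummy input at each step
    a = W @ h + x
    pre_acts.append(a)
    h = np.tanh(a)

# Backward pass: dh_T/dh_{T-k} is a product of k Jacobians diag(1 - tanh(a)^2) @ W.
grad = np.eye(hidden)
norms = []
for a in reversed(pre_acts):
    grad = grad @ (np.diag(1.0 - np.tanh(a) ** 2) @ W)
    norms.append(np.linalg.norm(grad))

print(f"gradient norm after  1 step : {norms[0]:.3e}")
print(f"gradient norm after 25 steps: {norms[24]:.3e}")
print(f"gradient norm after 50 steps: {norms[-1]:.3e}")
```

Running the sketch shows the norm falling by several orders of magnitude over 50 steps, which is precisely why an SRNN trained with gradient descent receives almost no learning signal from distant time steps.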
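Metaheuristic weight optimization sidesteps this because it never forms a gradient at all. The following is a minimal sketch of the general idea (not a method from the paper; the toy sine-prediction task and the (1+1) evolution strategy are stand-ins for the metaheuristics the survey covers, such as GA or PSO): the SRNN weights are flattened into one parameter vector and improved purely by mutation and selection on the loss.

```python
import numpy as np

rng = np.random.default_rng(1)

H = 8  # hidden units; 1 input, 1 output (illustrative sizes)

def unpack(theta):
    """Split a flat parameter vector into the SRNN weight matrices."""
    w_in = theta[:H].reshape(H, 1)
    w_rec = theta[H:H + H * H].reshape(H, H)
    w_out = theta[H + H * H:].reshape(1, H)
    return w_in, w_rec, w_out

def loss(theta, xs, ys):
    """Mean squared error of the SRNN's one-step-ahead predictions."""
    w_in, w_rec, w_out = unpack(theta)
    h = np.zeros((H, 1))
    err = 0.0
    for x, y in zip(xs, ys):
        h = np.tanh(w_in * x + w_rec @ h)
        err += ((w_out @ h)[0, 0] - y) ** 2
    return err / len(xs)

# Toy data: predict the next sample of a sine wave.
t = np.linspace(0, 4 * np.pi, 200)
xs, ys = np.sin(t[:-1]), np.sin(t[1:])

dim = H + H * H + H
best = rng.normal(0.0, 0.1, dim)
best_loss = loss(best, xs, ys)

# (1+1) evolution strategy: mutate, keep the child only if it scores better.
for step in range(2000):
    cand = best + rng.normal(0.0, 0.05, dim)
    cand_loss = loss(cand, xs, ys)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

print("final MSE:", best_loss)
```

Any population-based metaheuristic can be dropped into the mutate-and-select loop; because the weights are updated from loss evaluations alone, training quality over long sequences is not limited by how well gradients propagate.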