The Generalization Ability of Artificial Neural Networks in Forecasting TCP/IP Traffic Trends: How Much Does the Size of Learning Rate Matter?

JCR: Q4 (Computer Science)
V. Moyo, K. Sibanda
{"title":"The Generalization Ability of Artificial Neural Networks in Forecasting TCP/IP Traffic Trends: How Much Does the Size of Learning Rate Matter?","authors":"V. Moyo, K. Sibanda","doi":"10.12783/ijcsa.2015.0401.02","DOIUrl":null,"url":null,"abstract":"Artificial Neural Networks (ANNs) have attracted increasing attention from researchers in many fields. They have proved to be one of the most powerful tools in the domain of forecasting and analysis of various time series. The ability to model almost any kind of function regardless of its degree of nonlinearity, positions ANNs as good candidates for predicting and modelling self-similar time series such as TCP/IP traffic. Inspite of this, one of the most difficult and least understood tasks in the design of ANN models is the selection of the most appropriate size of the learning rate. Although some guidance in the form of heuristics is available for the choice of this parameter, none have been universally accepted. In this paper we empirically investigate various sizes of learning rates with the aim of determining the optimum learning rate size for generalization ability of an ANN trained on forecasting TCP/IP network traffic trends. MATLAB Version 7.4.0.287’s Neural Network toolbox version 5.0.2 (R2007a) was used for our experiments. The results are found to be promising in terms of ease of design and use of ANNs. We found from the experiments that, depending on the difficulty of the problem at hand, it is advisable to set the learning rate to 0.1 for the standard Backpropagation algorithm and to either 0.1 or 0.2 if used in conjunction with the momentum term of 0.5 or 0.6. We advise minimal use of the momentum term as it greatly interferes with the training process of ANNs. Although the information obtained from the tests carried out in this paper is specific to the problem considered, it provides users of Back-propagation networks with a valuable guide on the behaviour of ANNs under a wide range of operating conditions. It is important to note that the guidelines accrued from this paper are of an assistive and not necessarily restrictive nature to potential ANN modellers.","PeriodicalId":39465,"journal":{"name":"International Journal of Computer Science and Applications","volume":"35 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2015-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Science and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12783/ijcsa.2015.0401.02","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 3

Abstract

Artificial Neural Networks (ANNs) have attracted increasing attention from researchers in many fields. They have proved to be one of the most powerful tools in the domain of forecasting and analysis of various time series. The ability to model almost any kind of function, regardless of its degree of nonlinearity, positions ANNs as good candidates for predicting and modelling self-similar time series such as TCP/IP traffic. In spite of this, one of the most difficult and least understood tasks in the design of ANN models is the selection of the most appropriate size of the learning rate. Although some guidance in the form of heuristics is available for the choice of this parameter, none has been universally accepted. In this paper we empirically investigate various learning rate sizes with the aim of determining the optimum learning rate for the generalization ability of an ANN trained to forecast TCP/IP network traffic trends. MATLAB Version 7.4.0.287's Neural Network Toolbox version 5.0.2 (R2007a) was used for our experiments. The results are promising in terms of ease of design and use of ANNs. We found from the experiments that, depending on the difficulty of the problem at hand, it is advisable to set the learning rate to 0.1 for the standard backpropagation algorithm, and to either 0.1 or 0.2 when it is used in conjunction with a momentum term of 0.5 or 0.6. We advise minimal use of the momentum term, as it greatly interferes with the training process of ANNs. Although the information obtained from the tests carried out in this paper is specific to the problem considered, it provides users of backpropagation networks with a valuable guide to the behaviour of ANNs under a wide range of operating conditions. It is important to note that the guidelines derived from this paper are assistive rather than restrictive in nature for potential ANN modellers.
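To make the recommended settings concrete, the following is a minimal sketch, written in Python/NumPy rather than the authors' MATLAB Neural Network Toolbox setup, of a one-hidden-layer network trained by standard backpropagation on a sliding-window forecasting task; it shows where a learning rate of 0.1 and an optional momentum term of 0.5, as quoted above, enter the weight update. The toy traffic series, window length of 8, and hidden-layer size of 10 are illustrative assumptions, not values taken from the paper.

# Minimal sketch (not the authors' MATLAB setup): a one-hidden-layer network
# trained with standard backpropagation, showing where the learning rate and
# the optional momentum term enter the weight update. The toy traffic series,
# window length, and hidden-layer size are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy "traffic" series; in the paper this would be measured TCP/IP traffic.
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

# Sliding window: predict the next sample from the previous `lag` samples.
lag = 8
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:].reshape(-1, 1)

hidden = 10
W1 = rng.standard_normal((lag, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.1
b2 = np.zeros(1)

lr = 0.1        # learning rate recommended for standard backpropagation
momentum = 0.5  # set to 0.0 to disable the momentum term

# Previous weight changes, needed for the momentum update.
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(200):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                       # gradient of 0.5*(out - y)^2 w.r.t. the output

    # Backward pass (averaged over the training set).
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # derivative of tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)

    # Gradient descent with momentum: delta_w = momentum * previous_delta - lr * gradient
    vW1 = momentum * vW1 - lr * gW1; W1 += vW1
    vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    vW2 = momentum * vW2 - lr * gW2; W2 += vW2
    vb2 = momentum * vb2 - lr * gb2; b2 += vb2

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  MSE {np.mean(err ** 2):.5f}")

With momentum set to 0.0 the update reduces to plain gradient descent, which corresponds to the standard backpropagation case for which the 0.1 learning rate is recommended; a nonzero momentum adds a fraction of the previous weight change to the current one, which is the interference with training that the abstract cautions against.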
Source Journal
International Journal of Computer Science and Applications (Computer Science: Computer Science Applications)
About the journal: IJCSA is an international forum for scientists and engineers involved in computer science and its applications to publish high-quality, refereed papers. Papers reporting original research and innovative applications from all parts of the world are welcome. Papers for publication in the IJCSA are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.