Automatically Avoiding Overfitting in Deep Neural Networks by Using Hyper-Parameters Optimization Methods

Zahraa Saddi Kadhim, Hasanen S. Abdullah, K. I. Ghathwan
{"title":"Automatically Avoiding Overfitting in Deep Neural Networks by Using Hyper-Parameters Optimization Methods","authors":"Zahraa Saddi Kadhim, Hasanen S. Abdullah, K. I. Ghathwan","doi":"10.3991/ijoe.v19i05.38153","DOIUrl":null,"url":null,"abstract":"Overfitting is one issue that deep learning faces in particular. It leads to highly accurate classification results, but they are fraudulent. As a result, if the overfitting problem is not fully resolved, systems that rely on prediction or recognition and are sensitive to accuracy will produce untrustworthy results. All prior suggestions helped to lessen this issue but fell short of eliminating it entirely while maintaining crucial data. This paper proposes a novel approach to guarantee the preservation of critical data while eliminating overfitting completely. Numeric and image datasets are employed in two types of networks: convolutional and deep neural networks. Following the usage of three regularization techniques (L1, L2, and dropout), apply two optimization algorithms (Bayesian and random search), allowing them to select the hyperparameters automatically, with regularization techniques being one of the hyperparameters that are automatically selected. The obtained results, in addition to completely eliminating the overfitting issue, showed that the accuracy of the image data was 97.82% and 90.72 % when using Bayesian and random search techniques, respectively, and was 95.3 % and 96.5 % when using the same algorithms with a numeric dataset. \n  \n  \n ","PeriodicalId":247144,"journal":{"name":"Int. J. Online Biomed. Eng.","volume":"576 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Online Biomed. 
Eng.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3991/ijoe.v19i05.38153","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Overfitting is a problem that deep learning in particular faces. It produces classification results that appear highly accurate but are in fact misleading. Consequently, if the overfitting problem is not fully resolved, accuracy-sensitive systems that rely on prediction or recognition will produce untrustworthy results. Prior proposals have helped to lessen this issue but have fallen short of eliminating it entirely while preserving important data. This paper proposes a novel approach that guarantees the preservation of critical data while eliminating overfitting completely. Numeric and image datasets are employed with two types of networks: convolutional and deep neural networks. Three regularization techniques (L1, L2, and dropout) are used, and two optimization algorithms (Bayesian optimization and random search) are applied to select the hyperparameters automatically, with the choice of regularization technique itself treated as one of the automatically selected hyperparameters. The obtained results, in addition to completely eliminating the overfitting issue, showed that accuracy on the image data was 97.82% and 90.72% when using Bayesian and random search, respectively, and was 95.3% and 96.5% when using the same algorithms on the numeric dataset.
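The key idea in the abstract — letting the optimizer choose the regularization technique itself as one of the hyperparameters — can be illustrated with a minimal sketch. This is not the paper's actual code: the search space, the toy objective, and all names below are illustrative assumptions; a real run would train the CNN/DNN with each sampled configuration and score it by held-out validation accuracy.

```python
import random

# Hedged sketch (assumed, not the authors' implementation): random search
# over a hyperparameter space in which the regularization technique
# ("l1", "l2", "dropout") is itself one of the sampled hyperparameters.

SEARCH_SPACE = {
    "regularizer": ["l1", "l2", "dropout"],  # technique is a hyperparameter
    "reg_strength": [1e-4, 1e-3, 1e-2],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "units": [32, 64, 128],
}

def sample_config(rng):
    """Draw one candidate configuration uniformly from SEARCH_SPACE."""
    return {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}

def random_search(objective, n_trials=50, seed=0):
    """Evaluate n_trials random configurations; return the best one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

def toy_objective(cfg):
    """Toy stand-in for validation accuracy. In the paper's setting this
    would be the accuracy of a CNN/DNN trained with cfg's regularizer."""
    bonus = {"l1": 0.01, "l2": 0.02, "dropout": 0.03}[cfg["regularizer"]]
    return 0.9 + bonus - cfg["reg_strength"]

if __name__ == "__main__":
    cfg, score = random_search(toy_objective)
    print(cfg, round(score, 4))
```

Bayesian optimization, the paper's other search strategy, replaces the uniform sampling in `sample_config` with a surrogate model that proposes configurations expected to improve on past trials; the outer loop and the mixed categorical/continuous search space stay the same.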