An effective Reinforcement Learning method for preventing the overfitting of Convolutional Neural Networks

Ali Mahdavi-Hormat, Mohammad Bagher Menhaj, Ashkan Shakarami
{"title":"An effective Reinforcement Learning method for preventing the overfitting of Convolutional Neural Networks","authors":"Ali Mahdavi-Hormat,&nbsp;Mohammad Bagher Menhaj,&nbsp;Ashkan Shakarami","doi":"10.1007/s43674-022-00046-8","DOIUrl":null,"url":null,"abstract":"<div><p>Convolutional Neural Networks are machine learning models that have proven abilities in many variants of tasks. This powerful machine learning model sometimes suffers from overfitting. This paper proposes a method based on Reinforcement Learning for addressing this problem. In this research, the parameters of a target layer in the Convolutional Neural Network take as a state for the Agent of the Reinforcement Learning section. Then the Agent gives some actions as forming parameters of a hyperbolic secant function. This function’s form is changed gradually and implicitly by the proposed method. The inputs of the function are the weights of the layer, and its outputs multiply by the same weights to updating them. In this study, the proposed method is inspired by the Deep Deterministic Policy Gradient model because the actions of the Agent are into a continuous domain. To show the proposed method’s effectiveness, the classification task is considered using Convolutional Neural Networks. In this study, 7 datasets have been used for evaluating the model; MNIST, Extended MNIST, small-notMNIST, Fashion-MNIST, sign language MNIST, CIFAR-10, and CIFAR-100.\n</p></div>","PeriodicalId":72089,"journal":{"name":"Advances in computational intelligence","volume":"2 5","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in computational intelligence","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43674-022-00046-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Convolutional Neural Networks are machine learning models with proven ability across many kinds of tasks. Despite their power, these models sometimes suffer from overfitting. This paper proposes a method based on Reinforcement Learning to address this problem. In this research, the parameters of a target layer in the Convolutional Neural Network serve as the state for the Agent in the Reinforcement Learning component. The Agent then outputs actions that form the parameters of a hyperbolic secant function, whose shape the proposed method changes gradually and implicitly. The inputs of the function are the weights of the layer, and its outputs are multiplied by those same weights to update them. The proposed method is inspired by the Deep Deterministic Policy Gradient model because the Agent's actions lie in a continuous domain. To demonstrate the method's effectiveness, a classification task using Convolutional Neural Networks is considered. Seven datasets are used to evaluate the model: MNIST, Extended MNIST, small-notMNIST, Fashion-MNIST, Sign Language MNIST, CIFAR-10, and CIFAR-100.
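To make the weight-modulation mechanism concrete, the minimal sketch below illustrates the idea in Python. It is not the authors' implementation: the function name `modulate_weights`, the parameterization `sech(a*w + b)`, and the fixed action values are all assumptions, since the abstract only states that the agent's actions shape a hyperbolic secant function whose inputs are the layer weights and whose outputs rescale those weights.

```python
import numpy as np

def sech(x):
    # Hyperbolic secant: sech(x) = 1 / cosh(x), bounded in (0, 1].
    return 1.0 / np.cosh(x)

def modulate_weights(weights, a, b):
    """Rescale layer weights by a sech gate whose shape parameters
    (a, b) stand in for the continuous actions of the RL agent.
    The exact functional form sech(a*w + b) is an assumption; the
    abstract does not spell it out."""
    gate = sech(a * weights + b)
    return weights * gate

# Toy usage: the "state" is the target layer's weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)).astype(np.float32)

# In the paper's setup, a DDPG-style actor would map the state to
# continuous actions; fixed values are used here for illustration.
a, b = 0.8, 0.0
w_updated = modulate_weights(w, a, b)
print(w_updated)
```

Because `sech` outputs values in (0, 1], this gate shrinks large-magnitude weights more than small ones, which is one plausible way such a mechanism could act as an implicit regularizer against overfitting.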
