Examining Hyperparameters of Neural Networks Trained Using Local Search

Ahmed Aly, G. Guadagni, J. Dugan
{"title":"用局部搜索训练神经网络的超参数检验","authors":"Ahmed Aly, G. Guadagni, J. Dugan","doi":"10.1109/ICICIS46948.2019.9014658","DOIUrl":null,"url":null,"abstract":"Deep neural networks (DNNs) have been found useful for many applications. However, training and designing those networks can be challenging and is considered more of an art or an engineering process than rigorous science. In this regard, the important process of choosing hyperparameters is relevant. In addition, training neural networks with derivative-free methods is somewhat understudied. Particularly, with regards to hyperparameter selection. The paper presents a small-scale study of 3 hyperparameters choice for convolutional neural networks (CNNs). The networks were trained with two single-candidate optimization algorithms: Stochastic Gradient Descent (derivative-based) and Local Search (derivative-free). The CNN is trained on a subset of the FashionMNIST dataset. Experimental results show that hyperparameter selection can be detrimental for Local Search, especially regarding network parametrization. Moreover, the best hyperparameter choices didn't match for both algorithms. Future investigation into the training dynamics of Local Search is likely needed.","PeriodicalId":200604,"journal":{"name":"2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Examining Hyperparameters of Neural Networks Trained Using Local Search\",\"authors\":\"Ahmed Aly, G. Guadagni, J. Dugan\",\"doi\":\"10.1109/ICICIS46948.2019.9014658\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural networks (DNNs) have been found useful for many applications. However, training and designing those networks can be challenging and is considered more of an art or an engineering process than rigorous science. In this regard, the important process of choosing hyperparameters is relevant. In addition, training neural networks with derivative-free methods is somewhat understudied. Particularly, with regards to hyperparameter selection. The paper presents a small-scale study of 3 hyperparameters choice for convolutional neural networks (CNNs). The networks were trained with two single-candidate optimization algorithms: Stochastic Gradient Descent (derivative-based) and Local Search (derivative-free). The CNN is trained on a subset of the FashionMNIST dataset. Experimental results show that hyperparameter selection can be detrimental for Local Search, especially regarding network parametrization. Moreover, the best hyperparameter choices didn't match for both algorithms. 
Future investigation into the training dynamics of Local Search is likely needed.\",\"PeriodicalId\":200604,\"journal\":{\"name\":\"2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICICIS46948.2019.9014658\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICIS46948.2019.9014658","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep neural networks (DNNs) have been found useful for many applications. However, training and designing these networks is challenging and is considered more of an art or an engineering process than a rigorous science, which makes the choice of hyperparameters an important step. In addition, training neural networks with derivative-free methods is somewhat understudied, particularly with regard to hyperparameter selection. This paper presents a small-scale study of the choice of three hyperparameters for convolutional neural networks (CNNs). The networks were trained with two single-candidate optimization algorithms: Stochastic Gradient Descent (derivative-based) and Local Search (derivative-free). The CNNs were trained on a subset of the FashionMNIST dataset. Experimental results show that hyperparameter selection can be detrimental to Local Search, especially with respect to network parametrization. Moreover, the best hyperparameter choices for the two algorithms did not match. Further investigation into the training dynamics of Local Search is likely needed.
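To make the "single-candidate Local Search" baseline concrete, the sketch below shows one plausible derivative-free training loop in PyTorch: every parameter is perturbed with Gaussian noise, and the candidate is kept only if the batch loss decreases. The architecture, the subset size, and the perturbation scale `sigma` are illustrative assumptions, not the authors' exact settings.

```python
# Minimal single-candidate Local Search sketch for a small CNN on a
# FashionMNIST subset. Architecture, subset size, and sigma are assumptions
# made for illustration, not the paper's reported configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Small CNN for 28x28 grayscale inputs (illustrative architecture).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 7 * 7, 10),
).to(device)
loss_fn = nn.CrossEntropyLoss()

# Subset of FashionMNIST; the 1000-sample size is a guess, not the paper's.
train_set = datasets.FashionMNIST("data", train=True, download=True,
                                  transform=transforms.ToTensor())
loader = DataLoader(Subset(train_set, range(1000)), batch_size=64, shuffle=True)

sigma = 0.01  # perturbation scale: one hyperparameter such a study might vary


@torch.no_grad()
def local_search_step(x, y):
    """Perturb all parameters with Gaussian noise; keep the move only if the batch loss improves."""
    current = loss_fn(model(x), y).item()
    noises = [torch.randn_like(p) * sigma for p in model.parameters()]
    for p, n in zip(model.parameters(), noises):
        p.add_(n)
    candidate = loss_fn(model(x), y).item()
    if candidate >= current:  # reject: undo the perturbation
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)
        return current
    return candidate  # accept the perturbed candidate


for epoch in range(3):
    for x, y in loader:
        loss = local_search_step(x.to(device), y.to(device))
    print(f"epoch {epoch}: last batch loss {loss:.4f}")
```

Because each step evaluates the loss twice and uses no gradients, the method's progress depends strongly on the perturbation scale and the number of parameters, which is consistent with the abstract's observation that network parametrization matters for Local Search.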