Architecture Optimization Model for the Deep Neural Network

K. Ukaoha, E. C. Igodan
{"title":"Architecture Optimization Model for the Deep Neural Network","authors":"K. Ukaoha, E. C. Igodan","doi":"10.21608/ijicis.2019.96101","DOIUrl":null,"url":null,"abstract":"The daunting and challenging tasks of specifying the optimal network architecture and its parameters are still a major area of research in the field of Machine Learning (ML) till date. These tasks though determine the success of building and training an effective and accurate model, are yet to be considered on a deep network having three hidden layers with varying optimized parameters to the best of our knowledge. This is due to expert’s opinion that it is practically difficult to determine a good Multilayer Perceptron (MLP) topology with more than two or three hidden layers without considering the number of samples and complexity of the classification to be learnt. In this study, a novel approach that combines an evolutionary genetic algorithm and an optimization algorithm and a supervised deep neural network (Deep-NN) using alternative activation functions with the view of modeling the prediction for the admission of prospective university students. The genetic algorithm is used to select optimal network parameters for the Deep-NN. Thus, this study presents a novel methodology that is effective, automatic and less human-dependent in finding optimal solution to diverse binary classification benchmarks. 
The model is trained, validated and tested using various performance metrics to measure the generalization ability and its performance.","PeriodicalId":244591,"journal":{"name":"International Journal of Intelligent Computing and Information Sciences","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Computing and Information Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21608/ijicis.2019.96101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Specifying an optimal network architecture and its parameters remains a daunting and challenging task, and is still a major area of research in the field of Machine Learning (ML). Although these tasks determine the success of building and training an effective and accurate model, to the best of our knowledge they have not yet been considered for a deep network with three hidden layers and varying optimized parameters. This is because experts hold that it is practically difficult to determine a good Multilayer Perceptron (MLP) topology with more than two or three hidden layers without considering the number of samples and the complexity of the classification to be learnt. In this study, we propose a novel approach that combines an evolutionary genetic algorithm, an optimization algorithm, and a supervised deep neural network (Deep-NN) using alternative activation functions, with the aim of modeling the prediction of admission for prospective university students. The genetic algorithm is used to select optimal network parameters for the Deep-NN. This study thus presents a novel methodology that is effective, automatic, and less human-dependent in finding optimal solutions to diverse binary classification benchmarks. The model is trained, validated, and tested using various performance metrics to measure its generalization ability and performance.
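The abstract describes a genetic algorithm that searches for good Deep-NN parameters (here, the sizes of three hidden layers and an activation function). The paper does not specify its encoding, operators, or search ranges, so the following is only a minimal illustrative sketch of that idea: the chromosome layout, bounds, selection scheme, and the toy surrogate fitness function are all assumptions, and a real run would replace `fitness` with the validation accuracy of a trained Deep-NN.

```python
import random

# Hypothetical search space: the paper's exact ranges are not given,
# so these bounds and activation choices are illustrative assumptions.
HIDDEN_RANGE = (4, 64)                 # neurons per hidden layer
ACTIVATIONS = ["relu", "tanh", "sigmoid"]

def random_chromosome():
    """One candidate architecture: three hidden-layer sizes plus an
    activation function, encoded as a flat 4-gene tuple."""
    sizes = [random.randint(*HIDDEN_RANGE) for _ in range(3)]
    return (*sizes, random.choice(ACTIVATIONS))

def fitness(chrom):
    """Placeholder fitness. In the paper this would be the validation
    accuracy of a Deep-NN trained with these parameters; here we use a
    toy surrogate (prefer mid-sized layers) so the sketch runs quickly."""
    n1, n2, n3, _ = chrom
    target = sum(HIDDEN_RANGE) / 2
    return -sum(abs(n - target) for n in (n1, n2, n3))

def crossover(a, b):
    """Single-point crossover over the 4-gene chromosome."""
    point = random.randint(1, 3)
    return a[:point] + b[point:]

def mutate(chrom, rate=0.2):
    """Resample each gene independently with probability `rate`."""
    genes = list(chrom)
    for i in range(3):
        if random.random() < rate:
            genes[i] = random.randint(*HIDDEN_RANGE)
    if random.random() < rate:
        genes[3] = random.choice(ACTIVATIONS)
    return tuple(genes)

def evolve(generations=30, pop_size=20):
    """Truncation selection: keep the fitter half, refill with
    mutated offspring of random parent pairs."""
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

With the surrogate fitness, the population converges toward mid-sized layers; plugging in a trained-network evaluation instead turns the same loop into the kind of architecture search the abstract outlines.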