A Comparative Analysis of Optimizers in Recurrent Neural Networks for Text Classification

Syed Mahbubuz Zaman, M. Hasan, Redwan Islam Sakline, Dipto Das, Md. Ashraful Alam
{"title":"递归神经网络文本分类优化器的比较分析","authors":"Syed Mahbubuz Zaman, M. Hasan, Redwan Islam Sakline, Dipto Das, Md. Ashraful Alam","doi":"10.1109/CSDE53843.2021.9718394","DOIUrl":null,"url":null,"abstract":"The performance of any deep learning model depends heavily on the choice of optimizers and their corresponding hyper-parameters. For any given problem researchers struggle to select the best possible optimizer from a myriad of optimizers proposed in existing literature. Currently the process of optimizer selection in practice is anecdotal at best whereby practitioners either randomly select an optimizer or rely on best practices or online recommendations not grounded on empirical evidence base. In our paper, we delve deep into this problem of picking the right optimizer for text based datasets and linguistic classification problems, by bench-marking ten optimizers on three different RNN models (Bi-GRU, Bi-LSTM and BRNN) on three spam email based benchmark datasets. We analyse the performance of models employing these optimizers using train accuracy, train loss, validation accuracy, validation loss, test accuracy, test loss and RO-AUC score as metrics. The results show that Adaptive Optimization methods (RMSprop, Adam, Adam weight decay and Nadam) with default hyper-parameters outperform other optimizers in all three datasets and RNN model variations.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"A Comparative Analysis of Optimizers in Recurrent Neural Networks for Text Classification\",\"authors\":\"Syed Mahbubuz Zaman, M. Hasan, Redwan Islam Sakline, Dipto Das, Md. Ashraful Alam\",\"doi\":\"10.1109/CSDE53843.2021.9718394\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The performance of any deep learning model depends heavily on the choice of optimizers and their corresponding hyper-parameters. For any given problem researchers struggle to select the best possible optimizer from a myriad of optimizers proposed in existing literature. Currently the process of optimizer selection in practice is anecdotal at best whereby practitioners either randomly select an optimizer or rely on best practices or online recommendations not grounded on empirical evidence base. In our paper, we delve deep into this problem of picking the right optimizer for text based datasets and linguistic classification problems, by bench-marking ten optimizers on three different RNN models (Bi-GRU, Bi-LSTM and BRNN) on three spam email based benchmark datasets. We analyse the performance of models employing these optimizers using train accuracy, train loss, validation accuracy, validation loss, test accuracy, test loss and RO-AUC score as metrics. 
The results show that Adaptive Optimization methods (RMSprop, Adam, Adam weight decay and Nadam) with default hyper-parameters outperform other optimizers in all three datasets and RNN model variations.\",\"PeriodicalId\":166950,\"journal\":{\"name\":\"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)\",\"volume\":\"49 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSDE53843.2021.9718394\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSDE53843.2021.9718394","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

The performance of any deep learning model depends heavily on the choice of optimizer and its hyper-parameters. For any given problem, researchers struggle to select the best optimizer from the myriad proposed in the literature. In practice, optimizer selection is anecdotal at best: practitioners either pick an optimizer at random or follow best practices and online recommendations that lack an empirical evidence base. In this paper, we examine the problem of choosing the right optimizer for text datasets and linguistic classification tasks by benchmarking ten optimizers on three RNN models (Bi-GRU, Bi-LSTM, and BRNN) across three spam-email benchmark datasets. We analyse the performance of models trained with these optimizers using training accuracy, training loss, validation accuracy, validation loss, test accuracy, test loss, and ROC-AUC score as metrics. The results show that adaptive optimization methods (RMSprop, Adam, Adam with weight decay (AdamW), and Nadam) with default hyper-parameters outperform the other optimizers across all three datasets and RNN model variants.
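To make the experimental setup concrete, the following is a minimal sketch of such a benchmarking loop in TensorFlow/Keras. It is not the authors' code: the layer sizes, vocabulary size, training schedule, and the random stand-in data are illustrative assumptions. Only the overall idea comes from the abstract: train a fresh bidirectional RNN with each optimizer at its Keras default hyper-parameters and record accuracy, loss, and ROC-AUC.

```python
# Hedged sketch of the optimizer benchmark described in the abstract.
# All sizes and data below are illustrative assumptions, not the paper's setup.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 20_000  # assumed tokenizer vocabulary size
MAX_LEN = 200        # assumed padded sequence length

def fake_split(n):
    """Random stand-in for a tokenized spam-email corpus with binary labels."""
    X = np.random.randint(0, VOCAB_SIZE, size=(n, MAX_LEN))
    y = np.random.randint(0, 2, size=(n,))
    return X, y

X_train, y_train = fake_split(1000)
X_val, y_val = fake_split(200)
X_test, y_test = fake_split(200)

def build_bilstm():
    """Fresh Bi-LSTM classifier so every optimizer starts from new weights."""
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 128),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# A representative subset of the benchmarked optimizers, each used with its
# Keras default hyper-parameters. AdamW ("Adam weight decay") ships with
# TensorFlow >= 2.11; older versions provide it via tensorflow_addons instead.
optimizers = {
    "SGD": tf.keras.optimizers.SGD,
    "RMSprop": tf.keras.optimizers.RMSprop,
    "Adam": tf.keras.optimizers.Adam,
    "AdamW": tf.keras.optimizers.AdamW,
    "Nadam": tf.keras.optimizers.Nadam,
    "Adagrad": tf.keras.optimizers.Adagrad,
    "Adadelta": tf.keras.optimizers.Adadelta,
    "Adamax": tf.keras.optimizers.Adamax,
}

results = {}
for name, opt_cls in optimizers.items():
    model = build_bilstm()
    model.compile(
        optimizer=opt_cls(),  # default hyper-parameters only, as in the study
        loss="binary_crossentropy",
        metrics=["accuracy", tf.keras.metrics.AUC(name="roc_auc")],
    )
    model.fit(X_train, y_train,
              validation_data=(X_val, y_val),
              epochs=2, batch_size=64, verbose=0)
    results[name] = model.evaluate(X_test, y_test, return_dict=True)

for name, metrics in results.items():
    print(name, metrics)
```

Rebuilding the model inside the loop is the key design choice: each optimizer must start from freshly initialized weights, otherwise later optimizers would continue from wherever the previous run left off and the comparison would be meaningless. The same loop generalizes to the Bi-GRU and BRNN variants by swapping the recurrent layer in build_bilstm.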