Extending the hypergradient descent technique to reduce the time of optimal solution achieved in hyperparameter optimization algorithms

IF 1.6 · CAS Region 3 (Engineering) · JCR Q4 (ENGINEERING, INDUSTRIAL)
F. Seifi, S. T. A. Niaki
{"title":"扩展了超梯度下降技术,以减少超参数优化算法的最优解时间","authors":"F. Seifi, S. T. A. Niaki","doi":"10.5267/j.ijiec.2023.4.004","DOIUrl":null,"url":null,"abstract":"There have been many applications for machine learning algorithms in different fields. The importance of hyperparameters for machine learning algorithms is their control over the behaviors of training algorithms and their crucial impact on the performance of machine learning models. Tuning hyperparameters crucially affects the performance of machine learning algorithms, and future advances in this area mainly depend on well-tuned hyperparameters. Nevertheless, the high computational cost involved in evaluating the algorithms in large datasets or complicated models is a significant limitation that causes inefficiency of the tuning process. Besides, increased online applications of machine learning approaches have led to the requirement of producing good answers in less time. The present study first presents a novel classification of hyperparameter types based on their types to create high-quality solutions quickly. Then, based on this classification and using the hypergradient technique, some hyperparameters of deep learning algorithms are adjusted during the training process to decrease the search space and discover the optimal values of the hyperparameters. This method just needs only the parameters of the previous two steps and the gradient of the previous step. Finally, the proposed method is combined with other techniques in hyperparameter optimization, and the results are reviewed in two case studies. As confirmed by experimental results, the performance of the algorithms with the proposed method have been increased 36.62% and 23.16% (based on the best average accuracy) for Cifar10 and Cifar100 dataset respectively in early stages while the final produced answers with this method are equal to or better than the algorithms without it. Therefore, this method can be combined with hyperparameter optimization algorithms in order to improve their performance and make them more appropriate for online use by just using the parameters of the previous two steps and the gradient of the previous step.","PeriodicalId":51356,"journal":{"name":"International Journal of Industrial Engineering Computations","volume":"os-40 1","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Extending the hypergradient descent technique to reduce the time of optimal solution achieved in hyperparameter optimization algorithms\",\"authors\":\"F. Seifi, S. T. A. Niaki\",\"doi\":\"10.5267/j.ijiec.2023.4.004\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There have been many applications for machine learning algorithms in different fields. The importance of hyperparameters for machine learning algorithms is their control over the behaviors of training algorithms and their crucial impact on the performance of machine learning models. Tuning hyperparameters crucially affects the performance of machine learning algorithms, and future advances in this area mainly depend on well-tuned hyperparameters. Nevertheless, the high computational cost involved in evaluating the algorithms in large datasets or complicated models is a significant limitation that causes inefficiency of the tuning process. 
Besides, increased online applications of machine learning approaches have led to the requirement of producing good answers in less time. The present study first presents a novel classification of hyperparameter types based on their types to create high-quality solutions quickly. Then, based on this classification and using the hypergradient technique, some hyperparameters of deep learning algorithms are adjusted during the training process to decrease the search space and discover the optimal values of the hyperparameters. This method just needs only the parameters of the previous two steps and the gradient of the previous step. Finally, the proposed method is combined with other techniques in hyperparameter optimization, and the results are reviewed in two case studies. As confirmed by experimental results, the performance of the algorithms with the proposed method have been increased 36.62% and 23.16% (based on the best average accuracy) for Cifar10 and Cifar100 dataset respectively in early stages while the final produced answers with this method are equal to or better than the algorithms without it. Therefore, this method can be combined with hyperparameter optimization algorithms in order to improve their performance and make them more appropriate for online use by just using the parameters of the previous two steps and the gradient of the previous step.\",\"PeriodicalId\":51356,\"journal\":{\"name\":\"International Journal of Industrial Engineering Computations\",\"volume\":\"os-40 1\",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Industrial Engineering Computations\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.5267/j.ijiec.2023.4.004\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, INDUSTRIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Industrial Engineering Computations","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.5267/j.ijiec.2023.4.004","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Citations: 0

Abstract

Machine learning algorithms have found many applications in different fields. Hyperparameters matter because they control the behavior of training algorithms and have a crucial impact on the performance of machine learning models; future advances in the area largely depend on well-tuned hyperparameters. Nevertheless, the high computational cost of evaluating algorithms on large datasets or complicated models is a significant limitation that makes the tuning process inefficient. In addition, the growing number of online applications of machine learning has created a demand for good answers in less time. The present study first introduces a novel classification of hyperparameters, designed to help produce high-quality solutions quickly. Based on this classification and the hypergradient technique, some hyperparameters of deep learning algorithms are then adjusted during training to shrink the search space and discover their optimal values. The method requires only the parameters of the previous two steps and the gradient of the previous step. Finally, the proposed method is combined with other hyperparameter optimization techniques, and the results are examined in two case studies. Experimental results confirm that, in the early stages, the proposed method improves the performance of the algorithms by 36.62% and 23.16% (based on the best average accuracy) on the Cifar10 and Cifar100 datasets, respectively, while the final answers it produces are equal to or better than those of the algorithms without it. The method can therefore be combined with hyperparameter optimization algorithms to improve their performance and make them more suitable for online use.
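For context, hypergradient descent (the base technique the paper extends, after Baydin et al., 2018) treats a hyperparameter such as the learning rate as something to optimize by gradient descent alongside the model parameters: since the SGD step is theta_t = theta_{t-1} - alpha * grad f(theta_{t-1}), the derivative of the loss with respect to alpha is -grad f(theta_t) . grad f(theta_{t-1}), so alpha can be nudged each step using only the current and previous gradients. The sketch below is a minimal illustration of this rule on a toy quadratic loss; the loss, the initial values, and the meta learning rate `beta` are illustrative assumptions, not details from the paper, whose exact extension of the rule is not spelled out in the abstract.

```python
import numpy as np

# Minimal sketch of hypergradient descent on the learning rate
# (after Baydin et al., 2018 -- the base technique this paper extends).
# The toy quadratic loss and all constants are illustrative choices,
# not values taken from the paper.

def grad_f(theta):
    """Gradient of f(theta) = 0.5 * ||theta||^2."""
    return theta

theta = np.array([5.0, -3.0])     # model parameters
alpha = 0.01                      # learning rate: the hyperparameter being adapted
beta = 1e-4                       # meta learning rate for the hypergradient step
prev_grad = np.zeros_like(theta)  # gradient from the previous step

for step in range(500):
    grad = grad_f(theta)
    # Hypergradient of the loss w.r.t. alpha is -grad . prev_grad,
    # so descending in alpha means adding beta * (grad . prev_grad).
    alpha += beta * grad.dot(prev_grad)
    theta = theta - alpha * grad  # ordinary SGD step with the adapted rate
    prev_grad = grad

print("theta:", theta, "adapted alpha:", alpha)
```

Note that the previous gradient need not be stored separately: from the SGD update, grad f(theta_{t-1}) = (theta_{t-1} - theta_t) / alpha_t can be reconstructed from the parameter vectors of the last two steps, which appears consistent with the abstract's claim that the method needs only the parameters of the previous two steps and the gradient of the previous step.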
Source journal: International Journal of Industrial Engineering Computations
CiteScore: 5.70 · Self-citation rate: 9.10% · Articles per year: 35 · Review time: 20 weeks