Fine-tuning adaptive stochastic optimizers: determining the optimal hyperparameter $\epsilon$ via gradient magnitude histogram analysis

Gustavo Silva, Paul Rodriguez
{"title":"微调自适应随机优化器:通过梯度幅度直方图分析确定最佳超参数 $$epsilon$","authors":"Gustavo Silva, Paul Rodriguez","doi":"10.1007/s00521-024-10302-2","DOIUrl":null,"url":null,"abstract":"<p>Stochastic optimizers play a crucial role in the successful training of deep neural network models. To achieve optimal model performance, designers must carefully select both model and optimizer hyperparameters. However, this process is frequently demanding in terms of computational resources and processing time. While it is a well-established practice to tune the entire set of optimizer hyperparameters for peak performance, there is still a lack of clarity regarding the individual influence of hyperparameters mislabeled as “low priority”, including the safeguard factor <span>\\(\\epsilon\\)</span> and decay rate <span>\\(\\beta\\)</span>, in leading adaptive stochastic optimizers like the Adam optimizer. In this manuscript, we introduce a new framework based on the empirical probability density function of the loss’ gradient magnitude, termed as the “gradient magnitude histogram”, for a thorough analysis of adaptive stochastic optimizers and the safeguard hyperparameter <span>\\(\\epsilon\\)</span>. This framework reveals and justifies valuable relationships and dependencies among hyperparameters in connection to optimal performance across diverse tasks, such as classification, language modeling and machine translation. Furthermore, we propose a novel algorithm using gradient magnitude histograms to automatically estimate a refined and accurate search space for the optimal safeguard hyperparameter <span>\\(\\epsilon\\)</span>, surpassing the conventional trial-and-error methodology by establishing a worst-case search space that is two times narrower.</p>","PeriodicalId":18925,"journal":{"name":"Neural Computing and Applications","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fine-tuning adaptive stochastic optimizers: determining the optimal hyperparameter $$\\\\epsilon$$ via gradient magnitude histogram analysis\",\"authors\":\"Gustavo Silva, Paul Rodriguez\",\"doi\":\"10.1007/s00521-024-10302-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Stochastic optimizers play a crucial role in the successful training of deep neural network models. To achieve optimal model performance, designers must carefully select both model and optimizer hyperparameters. However, this process is frequently demanding in terms of computational resources and processing time. While it is a well-established practice to tune the entire set of optimizer hyperparameters for peak performance, there is still a lack of clarity regarding the individual influence of hyperparameters mislabeled as “low priority”, including the safeguard factor <span>\\\\(\\\\epsilon\\\\)</span> and decay rate <span>\\\\(\\\\beta\\\\)</span>, in leading adaptive stochastic optimizers like the Adam optimizer. In this manuscript, we introduce a new framework based on the empirical probability density function of the loss’ gradient magnitude, termed as the “gradient magnitude histogram”, for a thorough analysis of adaptive stochastic optimizers and the safeguard hyperparameter <span>\\\\(\\\\epsilon\\\\)</span>. 
This framework reveals and justifies valuable relationships and dependencies among hyperparameters in connection to optimal performance across diverse tasks, such as classification, language modeling and machine translation. Furthermore, we propose a novel algorithm using gradient magnitude histograms to automatically estimate a refined and accurate search space for the optimal safeguard hyperparameter <span>\\\\(\\\\epsilon\\\\)</span>, surpassing the conventional trial-and-error methodology by establishing a worst-case search space that is two times narrower.</p>\",\"PeriodicalId\":18925,\"journal\":{\"name\":\"Neural Computing and Applications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Computing and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00521-024-10302-2\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computing and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00521-024-10302-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract



Stochastic optimizers play a crucial role in the successful training of deep neural network models. To achieve optimal model performance, designers must carefully select both model and optimizer hyperparameters. However, this process is frequently demanding in terms of computational resources and processing time. While it is well-established practice to tune the entire set of optimizer hyperparameters for peak performance, there is still a lack of clarity regarding the individual influence of hyperparameters mislabeled as "low priority", including the safeguard factor $\epsilon$ and decay rate $\beta$, in leading adaptive stochastic optimizers such as the Adam optimizer. In this manuscript, we introduce a new framework based on the empirical probability density function of the loss gradient magnitude, termed the "gradient magnitude histogram", for a thorough analysis of adaptive stochastic optimizers and the safeguard hyperparameter $\epsilon$. This framework reveals and justifies valuable relationships and dependencies among hyperparameters in connection with optimal performance across diverse tasks, such as classification, language modeling, and machine translation. Furthermore, we propose a novel algorithm that uses gradient magnitude histograms to automatically estimate a refined and accurate search space for the optimal safeguard hyperparameter $\epsilon$, surpassing the conventional trial-and-error methodology by establishing a worst-case search space that is twice as narrow.
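The abstract rests on two technical ingredients: Adam's update $\theta_{t+1} = \theta_t - \alpha\,\hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$, where the safeguard $\epsilon$ prevents division by a near-zero $\sqrt{\hat{v}_t}$, and the empirical distribution of gradient magnitudes observed during training. The NumPy sketch below is only a rough illustration of that idea, not the authors' algorithm: the helpers `gradient_magnitude_histogram` and `suggest_epsilon_range`, the log-spaced binning, the quantile thresholds, and the synthetic gradients are all assumptions introduced here for demonstration.

```python
# Illustrative sketch (assumptions only, not the paper's estimation algorithm):
# build an empirical "gradient magnitude histogram" from gradient entries
# logged over a few minibatches, then read off a narrowed candidate search
# range for Adam's safeguard epsilon from its low quantiles, on the premise
# that epsilon only matters when it is comparable to sqrt(v_hat) ~ |g|.
import numpy as np

def gradient_magnitude_histogram(grad_samples, bins=100):
    """Empirical density of |g| over all sampled gradient entries (log-spaced bins)."""
    mags = np.abs(np.concatenate([g.ravel() for g in grad_samples]))
    mags = mags[mags > 0]  # drop exact zeros so log-spaced bin edges are valid
    edges = np.logspace(np.log10(mags.min()), np.log10(mags.max()), bins + 1)
    density, _ = np.histogram(mags, bins=edges, density=True)
    return density, edges

def suggest_epsilon_range(grad_samples, lo_q=0.01, hi_q=0.10):
    """Heuristic search range for epsilon from low quantiles of |g|.
    The quantile choices (1% and 10%) are assumptions for illustration only."""
    mags = np.abs(np.concatenate([g.ravel() for g in grad_samples]))
    mags = mags[mags > 0]
    return np.quantile(mags, lo_q), np.quantile(mags, hi_q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for gradients that would be logged during real training steps.
    fake_grads = [rng.normal(scale=1e-3, size=(256, 128)) for _ in range(10)]
    density, edges = gradient_magnitude_histogram(fake_grads)
    eps_lo, eps_hi = suggest_epsilon_range(fake_grads)
    print(f"suggested epsilon search range: [{eps_lo:.2e}, {eps_hi:.2e}]")
```

In practice such a diagnostic would be driven by gradients collected from the actual model and optimizer state rather than synthetic samples; the point of the sketch is simply that the histogram localizes where $\epsilon$ can influence the effective step size, which is the kind of dependency the paper's framework analyzes.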
