A Review on Hyper-Parameter Optimisation by Deep Learning Experiments

Rohan Bhattacharjee, Debjyoti Ghosh, Abhirup Mazumder
Journal of Mathematical Sciences & Computational Mathematics, vol. 191, no. 1. Published 2021-07-05. DOI: 10.15864/jmscm.2407
Citations: 0

Abstract

It has been found that during the runtime of a deep learning experiment, intermediate result values are discarded as the processes move forward. This loss of data forces the experiment to roll back to an earlier point, after which the hyper-parameters or results become difficult to recover (especially for large sets of experimental data). Hyper-parameters are the various constraints/measures that a learning model requires to generalise over distinct data patterns and to control the learning process. These hyper-parameters must be chosen and optimised properly so that the learning model can solve the given machine learning problem and so that, during training, a specific performance objective for an algorithm on a dataset is optimised. This review paper presents a Parameter Optimisation for Learning (POL) model that exposes the full life cycle of a deep learning experiment via an application programming interface (API), providing the means of storing, recovering, and examining parameter settings and intermediate values. To further ease the optimisation of hyper-parameters, the model incorporates optimisation functions, analysis, and data management. Moreover, the proposed model offers a high degree of interactivity and is being circulated among a number of machine learning practitioners, aiding further use in data management.
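The core idea described above, i.e. persisting hyper-parameter settings and intermediate values during a run so they can be recovered and compared later, can be illustrated with a minimal sketch. The class and function names below (`ExperimentStore`, `train`, etc.) are hypothetical illustrations, not the paper's actual POL API, which is not specified in this abstract:

```python
import random

class ExperimentStore:
    """Hypothetical in-memory store for hyper-parameter settings and
    intermediate values, sketching the storing/recovering/examining
    workflow the abstract describes (not the paper's actual API)."""

    def __init__(self):
        self.runs = {}  # run_id -> {"params": {...}, "history": [...]}

    def start_run(self, run_id, params):
        # Record the hyper-parameter settings for this run.
        self.runs[run_id] = {"params": dict(params), "history": []}

    def log(self, run_id, step, value):
        # Persist an intermediate value so it survives past the step.
        self.runs[run_id]["history"].append({"step": step, "value": value})

    def recover(self, run_id):
        # Examine a run's parameter settings and logged values later.
        return self.runs[run_id]

    def best_run(self):
        # Pick the run whose final logged value (e.g. a loss) is lowest.
        return min(self.runs,
                   key=lambda r: self.runs[r]["history"][-1]["value"])


def train(lr, steps=5):
    """Toy stand-in for training: the 'loss' shrinks by a factor
    that depends on the learning-rate hyper-parameter."""
    loss = 1.0
    for step in range(steps):
        loss *= (1 - lr)  # stand-in for one optimisation step
        yield step, loss


if __name__ == "__main__":
    store = ExperimentStore()
    random.seed(0)
    # A simple random search over the learning rate, with every
    # setting and intermediate value persisted in the store.
    for i in range(3):
        lr = random.choice([0.1, 0.3, 0.5])
        store.start_run(f"run{i}", {"lr": lr})
        for step, loss in train(lr):
            store.log(f"run{i}", step, loss)

    best = store.best_run()
    print(best, store.recover(best)["params"])
```

Because every run's settings and history are retained, the search can be examined or resumed after the fact instead of rolling back to an initial point, which is the failure mode the abstract highlights.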