On the benefits of automated tuning of hyper-parameters: an experiment related to temperature prediction on UAV computers

Renato S. Maximiano, Valdivino Alexandre de Santiago Júnior, E. H. Shiguemori
{"title":"超参数自动调优的好处:无人机计算机温度预测相关实验","authors":"Renato S. Maximiano, Valdivino Alexandre de Santiago Júnior, E. H. Shiguemori","doi":"10.5753/eniac.2022.227276","DOIUrl":null,"url":null,"abstract":"Finding the best configuration of a neural network to solve a problem has been challenging given the numerous possibilities of values of the hyper-parameters. Thus, tuning of hyper-parameters is one important approach and researchers suggest doing this automatically. However, it is important to verify when it is suitable to perform automated tuning which is usually very costly financially and also in terms of hardware infrastructure. In this study, we analyze the advantages of using a hyper-parameter optimization framework as a way of optimizing the automated search for hyper-parameters of a neural network. To achieve this goal, we used data from an experiment related to temperature prediction of computers embedded in unmanned aerial vehicles (UAVs), and the models Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) to perform these predictions. In addition, we compare the hyper-parameter optimization framework to the hyper-parameter exhaustive search technique varying the size of the training dataset. Results of our experiment shows that designing a model using a hyper-parameter optimizer can be up to 36.02% better than using exhaustive search, in addition to achieving satisfactory results with a reduced dataset.","PeriodicalId":165095,"journal":{"name":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the benefits of automated tuning of hyper-parameters: an experiment related to temperature prediction on UAV computers\",\"authors\":\"Renato S. Maximiano, Valdivino Alexandre de Santiago Júnior, E. 
H. Shiguemori\",\"doi\":\"10.5753/eniac.2022.227276\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Finding the best configuration of a neural network to solve a problem has been challenging given the numerous possibilities of values of the hyper-parameters. Thus, tuning of hyper-parameters is one important approach and researchers suggest doing this automatically. However, it is important to verify when it is suitable to perform automated tuning which is usually very costly financially and also in terms of hardware infrastructure. In this study, we analyze the advantages of using a hyper-parameter optimization framework as a way of optimizing the automated search for hyper-parameters of a neural network. To achieve this goal, we used data from an experiment related to temperature prediction of computers embedded in unmanned aerial vehicles (UAVs), and the models Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) to perform these predictions. In addition, we compare the hyper-parameter optimization framework to the hyper-parameter exhaustive search technique varying the size of the training dataset. 
Results of our experiment shows that designing a model using a hyper-parameter optimizer can be up to 36.02% better than using exhaustive search, in addition to achieving satisfactory results with a reduced dataset.\",\"PeriodicalId\":165095,\"journal\":{\"name\":\"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5753/eniac.2022.227276\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Anais do XIX Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2022)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5753/eniac.2022.227276","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Finding the best configuration of a neural network to solve a problem is challenging given the vast number of possible hyper-parameter values. Tuning of hyper-parameters is therefore an important approach, and researchers suggest doing it automatically. However, it is important to verify when automated tuning is suitable, since it is usually very costly both financially and in terms of hardware infrastructure. In this study, we analyze the advantages of using a hyper-parameter optimization framework as a way of optimizing the automated search for the hyper-parameters of a neural network. To achieve this goal, we used data from an experiment related to temperature prediction of computers embedded in unmanned aerial vehicles (UAVs), and the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models to perform these predictions. In addition, we compare the hyper-parameter optimization framework to the hyper-parameter exhaustive search technique while varying the size of the training dataset. Results of our experiment show that designing a model using a hyper-parameter optimizer can be up to 36.02% better than using exhaustive search, in addition to achieving satisfactory results with a reduced dataset.
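The abstract's core argument is a budget one: under a fixed number of trials, an automated optimizer that samples the hyper-parameter space freely can outperform exhaustive evaluation of a fixed grid. The paper does not publish its tuning code, and its real objective (validation error of LSTM/GRU temperature predictors) is expensive to evaluate, so the following is only an illustrative sketch with a hypothetical toy loss surface and made-up hyper-parameter ranges, using plain random search to stand in for an optimization framework:

```python
import itertools
import random

# Hypothetical validation-loss surface over two hyper-parameters
# (learning rate and hidden units). The minimum sits off the grid,
# as real optima usually do.
def val_loss(lr, units):
    return (lr - 0.012) ** 2 * 1e4 + (units - 70) ** 2 / 1e3

# Exhaustive search: evaluate every point of a fixed, coarse grid.
grid_lr = [0.001, 0.005, 0.01, 0.05, 0.1]
grid_units = [16, 32, 64, 128, 256]
exhaustive = min(val_loss(lr, u)
                 for lr, u in itertools.product(grid_lr, grid_units))

# Automated tuning, sketched as random search over the continuous
# space under the same budget of 25 evaluations. Frameworks such as
# Optuna or Hyperopt replace this with smarter (e.g. Bayesian) samplers.
random.seed(0)
trials = [val_loss(random.uniform(0.001, 0.1), random.randint(16, 256))
          for _ in range(25)]
optimized = min(trials)

print(f"exhaustive grid best loss: {exhaustive:.4f}")
print(f"random-search best loss:   {optimized:.4f}")
```

Because the grid can only ever return one of its 25 fixed points, its best loss is bounded below by the grid resolution, while the free sampler can land arbitrarily close to the true optimum given enough trials; this is the mechanism behind the kind of advantage the paper reports, not a reproduction of its 36.02% figure.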