Hyperparameter Optimization Machines

Martin Wistuba, Nicolas Schilling, L. Schmidt-Thieme
DOI: 10.1109/DSAA.2016.12
Published in: 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), October 2016
Citations: 22

Abstract

Algorithm selection and hyperparameter tuning are omnipresent problems for researchers and practitioners. It is therefore not surprising that efforts to automate this process with various meta-learning approaches have increased. Sequential model-based optimization (SMBO) is one of the most popular frameworks for finding optimal hyperparameter configurations. Although SMBO was originally designed for black-box optimization, researchers have contributed different meta-learning approaches to speed up the optimization process. We create a generalized framework of SMBO and its recent extensions which gives access to adaptive hyperparameter transfer learning with simple surrogates (AHT), a new class of hyperparameter optimization strategies. AHT reduces the time overhead of the optimization process by replacing time- and space-consuming transfer surrogate models with simple surrogates that employ adaptive transfer learning. In an empirical comparison on two different meta-data sets, we show that AHT outperforms various instances of the SMBO framework in the scenarios of hyperparameter tuning and algorithm selection.
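The SMBO framework the abstract builds on can be illustrated with a minimal sketch: fit a surrogate model to the configurations evaluated so far, then let an acquisition step choose the next configuration to evaluate. This is not the paper's AHT method; the nearest-neighbor surrogate, the greedy acquisition step, and the toy objective below are illustrative assumptions.

```python
import random

def smbo(objective, candidates, n_init=3, n_iter=10):
    """Minimal sequential model-based optimization (SMBO) loop.

    A surrogate model is fit to the configurations evaluated so far;
    an acquisition step then picks the most promising unevaluated
    candidate. Here the surrogate is a trivial nearest-neighbor
    predictor over numeric configurations.
    """
    random.seed(0)
    history = []  # (configuration, observed loss) pairs

    # Initial design: evaluate a few random configurations.
    for cfg in random.sample(candidates, n_init):
        history.append((cfg, objective(cfg)))

    def surrogate(cfg):
        # Predict the loss of the nearest already-evaluated configuration.
        _, predicted = min(history, key=lambda h: abs(h[0] - cfg))
        return predicted

    for _ in range(n_iter):
        evaluated = {cfg for cfg, _ in history}
        pending = [c for c in candidates if c not in evaluated]
        if not pending:
            break
        # Acquisition: greedily evaluate the candidate the surrogate
        # predicts to have the lowest loss.
        nxt = min(pending, key=surrogate)
        history.append((nxt, objective(nxt)))

    # Return the best configuration observed.
    return min(history, key=lambda h: h[1])

# Toy 1-D problem: minimize (x - 7)^2 over integer "configurations".
best_cfg, best_loss = smbo(lambda x: (x - 7) ** 2, list(range(20)))
```

Practical SMBO instances replace the nearest-neighbor surrogate with, for example, a Gaussian process or random forest, and the greedy step with an acquisition function such as expected improvement; the paper's contribution concerns replacing heavyweight transfer surrogates in this loop with simple surrogates that use adaptive transfer learning.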