A Modified Approach of Hyper-parameter Optimization to Assess The Classifier Performance

G. Krishna Sriharsha, D. Lakshmi Padmaja, G. R. Ramana Rao, G. Surya Deepa
DOI: 10.1109/PuneCon55413.2022.10014931
Published in: 2022 IEEE Pune Section International Conference (PuneCon), 2022-12-15
Citations: 0

Abstract

Modern algorithms are remarkably adept at identifying patterns in data that is too large or complex for humans to comprehend. It has become difficult to identify the set of hyperparameters that delivers a performance improvement for a given geometry of the data set. This has shifted the emphasis from processing the data (model improvement) to tuning the hyperparameters of the classifier. Since hyperparameters default to values chosen for a generic case, they are not specifically tuned to the given classification task. The purpose of this paper is to demonstrate a strategy that avoids unnecessary tuning attempts and shows the best performance for various classifiers on data sets of various geometries. The findings of this experiment will help the user determine whether hyperparameter tuning is worth the time and computational resources.
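The trade-off the abstract describes, default hyperparameters versus a tuned configuration, can be sketched roughly as follows. This is an illustrative example using scikit-learn's `GridSearchCV` on a bundled data set, not the specific strategy evaluated in the paper; the classifier, grid, and data set are arbitrary choices for demonstration:

```python
# Compare a classifier's default hyperparameters against a small grid
# search, to judge whether tuning is worth the extra compute.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: SVC with its library defaults, no tuning at all.
default_acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)

# Tuned: exhaustive search over a small hyperparameter grid,
# scored by 5-fold cross-validation on the training split.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", "auto"]}, cv=5)
grid.fit(X_tr, y_tr)
tuned_acc = grid.score(X_te, y_te)

print(f"default accuracy: {default_acc:.3f}")
print(f"tuned accuracy:   {tuned_acc:.3f} with {grid.best_params_}")
```

If the tuned accuracy is not meaningfully higher than the default, the grid search's cost (here, 6 candidates x 5 folds = 30 fits) bought nothing, which is exactly the kind of wasted tuning attempt the paper's strategy aims to avoid.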