G. Krishna Sriharsha, D. Lakshmi Padmaja, G. R. Ramana Rao, G. Surya Deepa
Title: A Modified Approach of Hyper-parameter Optimization to Assess the Classifier Performance
Published in: 2022 IEEE Pune Section International Conference (PuneCon), 15 December 2022
DOI: 10.1109/PuneCon55413.2022.10014931
Citations: 0
Abstract
Modern algorithms are remarkably adept at analyzing data that is too large or complex for humans to comprehend. However, it has become difficult to identify which hyperparameters deliver an improvement in performance for a given geometry of the data set. This has shifted the emphasis from processing the data (model improvement) to tuning the hyperparameters of the classifier. Since hyperparameters are set to default values for a generic case, they are not specially tuned to the given classification task. The purpose of this paper is to demonstrate a strategy that avoids unnecessary tuning attempts and identifies the best-performing configuration for various classifiers on various shapes of data geometry. The findings of this experiment will help the user determine whether hyperparameter tuning is worth the time and computational resources.
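The abstract does not detail the paper's specific strategy, but its core question — is hyperparameter tuning worth the compute, or do defaults suffice? — can be sketched with a minimal, self-contained example. The k-NN classifier, synthetic two-blob data, and candidate grid below are illustrative assumptions, not the paper's actual setup:

```python
import random

random.seed(0)

def make_blobs(n=100):
    """Two Gaussian blobs in 2-D, labelled 0 and 1 (assumed toy geometry)."""
    data = []
    for label, (cx, cy) in enumerate([(0.0, 0.0), (3.0, 3.0)]):
        for _ in range(n):
            point = (cx + random.gauss(0, 1.2), cy + random.gauss(0, 1.2))
            data.append((point, label))
    random.shuffle(data)
    return data

def knn_predict(train, point, k):
    """Majority vote among the k nearest training points."""
    nearest = sorted(
        train,
        key=lambda t: (t[0][0] - point[0]) ** 2 + (t[0][1] - point[1]) ** 2,
    )[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

def accuracy(train, test, k):
    hits = sum(knn_predict(train, p, k) == label for p, label in test)
    return hits / len(test)

data = make_blobs()
train, test = data[:150], data[150:]

# Baseline: the classifier's default hyperparameter (k=5 assumed as default).
default_k = 5
default_acc = accuracy(train, test, default_k)

# A small search over candidate values stands in for a tuning attempt.
best_k, best_acc = default_k, default_acc
for k in [1, 3, 7, 11, 15]:
    acc = accuracy(train, test, k)
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"default k={default_k}: accuracy {default_acc:.2f}")
print(f"best    k={best_k}: accuracy {best_acc:.2f}")
```

When the best tuned accuracy barely exceeds the default, the comparison suggests tuning is not worth the time and compute for this data geometry — the decision the abstract says the paper aims to support.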