S. Bischoff, M. Mendenhall, Andrew Rice, J. Vasquez
{"title":"Adapting learning parameter transition in the Generalized Learning Vector Quantization family of classifiers","authors":"S. Bischoff, M. Mendenhall, Andrew Rice, J. Vasquez","doi":"10.1109/WHISPERS.2010.5594950","DOIUrl":null,"url":null,"abstract":"Many methods of hyperspectral data classification require the adjustment of learning parameters for their success. To this end, one may fix the learning parameters, offer a functional-based parameter decay, or use a step-wise decrement of the learning parameters after a fixed number of training steps. Each of the three methods described rely on the expertise of user and do not necessarily lend themselves well to time-sensitive solutions. Classification methods based on the optimization of a cost function offer a clear advantage as this cost function can be used to adapt the learning schedule of the learning machine thus speeding convergence. We demonstrate this concept applied to variants of Sato & Yamada's Generalized Learning Vector Quantization and transition to the next set of learn rates at the appropriate time in the learning process. Experiments show that, by monitoring the stationarity of the cost function, one can automatically transition to the next learning parameter set significantly decreasing training times.","PeriodicalId":193944,"journal":{"name":"2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WHISPERS.2010.5594950","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Many methods of hyperspectral data classification require the adjustment of learning parameters for their success. To this end, one may fix the learning parameters, apply a functional parameter decay, or decrement the learning parameters step-wise after a fixed number of training steps. Each of these three methods relies on the expertise of the user and does not necessarily lend itself well to time-sensitive solutions. Classification methods based on the optimization of a cost function offer a clear advantage, as the cost function can be used to adapt the learning schedule of the learning machine, thus speeding convergence. We demonstrate this concept applied to variants of Sato & Yamada's Generalized Learning Vector Quantization, transitioning to the next set of learning rates at the appropriate time in the learning process. Experiments show that, by monitoring the stationarity of the cost function, one can automatically transition to the next learning parameter set, significantly decreasing training times.
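To make the idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' exact algorithm): a plain GLVQ training loop that monitors a moving window of the cost and advances through a user-supplied learning-rate schedule when the cost appears stationary. The function name, window-based stationarity test, and sigmoid cost term are assumptions for illustration only.

```python
import numpy as np

def glvq_train_with_rate_schedule(X, y, prototypes, proto_labels,
                                  rate_schedule=(0.1, 0.01, 0.001),
                                  window=50, tol=1e-4, max_epochs=500):
    """Hypothetical sketch: GLVQ training that transitions to the next
    learning rate when the cost function looks stationary."""
    rates = list(rate_schedule)
    lr = rates.pop(0)
    cost_history = []
    for epoch in range(max_epochs):
        epoch_cost = 0.0
        for xi, yi in zip(X, y):
            d = np.sum((prototypes - xi) ** 2, axis=1)   # squared distances
            same = proto_labels == yi
            j = np.argmin(np.where(same, d, np.inf))     # closest correct prototype
            k = np.argmin(np.where(~same, d, np.inf))    # closest incorrect prototype
            dj, dk = d[j], d[k]
            mu = (dj - dk) / (dj + dk)                   # GLVQ relative distance measure
            f = 1.0 / (1.0 + np.exp(-mu))                # sigmoid cost term f(mu)
            epoch_cost += f
            g = f * (1.0 - f)                            # derivative of the sigmoid
            denom = (dj + dk) ** 2
            # Sato & Yamada-style prototype updates (up to constant factors)
            prototypes[j] += lr * g * (4 * dk / denom) * (xi - prototypes[j])
            prototypes[k] -= lr * g * (4 * dj / denom) * (xi - prototypes[k])
        cost_history.append(epoch_cost / len(X))
        # Simple stationarity proxy: small relative change of the cost
        # over the last `window` epochs triggers a rate transition.
        if len(cost_history) >= window:
            recent = cost_history[-window:]
            rel_change = abs(recent[0] - recent[-1]) / (abs(recent[0]) + 1e-12)
            if rel_change < tol:
                if rates:
                    lr = rates.pop(0)     # transition to the next learning rate
                    cost_history.clear()  # restart the window under the new rate
                else:
                    break                 # schedule exhausted: stop training
    return prototypes
```

The design point mirrors the abstract: instead of a hand-tuned decay or a fixed step count, the cost function itself decides when each learning rate has done its work, so training spends no more epochs at a rate than the data requires.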