{"title":"用于监督机器学习的扰动调节核回归量","authors":"S. Kung, Pei-Yuan Wu","doi":"10.1109/MLSP.2012.6349743","DOIUrl":null,"url":null,"abstract":"This paper develops a kernel perturbation-regulated (KPR) regressor based on the errors-in-variables models. KPR offers a strong smoothing capability critical to the robustness of regression or classification results. For Gaussian cases, the notion of orthogonal polynomials is instrumental to optimal estimation and its error analysis. More exactly, the regressor may be expressed as a linear combination of many simple Hermite Regressors, each focusing on one (and only one) orthogonal polynomial. For Gaussian or non-Gaussian cases, this paper formally establishes a “Two-Projection Theorem” allowing the estimation task to be divided into two projection stages: the first projection reveals the effect of model-induced error (caused by under-represented regressor models) while the second projection reveals the extra estimation error due to the (inevitable) input measuring error. The two-projection analysis leads to a closed-form error formula critical for order/error tradeoff. The simulation results not only confirm the theoretical prediction but also demonstrate superiority of KPR over the conventional ridge regression method in MSE reduction.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Perturbation regulated kernel regressors for supervised machine learning\",\"authors\":\"S. Kung, Pei-Yuan Wu\",\"doi\":\"10.1109/MLSP.2012.6349743\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper develops a kernel perturbation-regulated (KPR) regressor based on the errors-in-variables models. KPR offers a strong smoothing capability critical to the robustness of regression or classification results. For Gaussian cases, the notion of orthogonal polynomials is instrumental to optimal estimation and its error analysis. More exactly, the regressor may be expressed as a linear combination of many simple Hermite Regressors, each focusing on one (and only one) orthogonal polynomial. For Gaussian or non-Gaussian cases, this paper formally establishes a “Two-Projection Theorem” allowing the estimation task to be divided into two projection stages: the first projection reveals the effect of model-induced error (caused by under-represented regressor models) while the second projection reveals the extra estimation error due to the (inevitable) input measuring error. The two-projection analysis leads to a closed-form error formula critical for order/error tradeoff. 
The simulation results not only confirm the theoretical prediction but also demonstrate superiority of KPR over the conventional ridge regression method in MSE reduction.\",\"PeriodicalId\":262601,\"journal\":{\"name\":\"2012 IEEE International Workshop on Machine Learning for Signal Processing\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE International Workshop on Machine Learning for Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MLSP.2012.6349743\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE International Workshop on Machine Learning for Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MLSP.2012.6349743","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Perturbation regulated kernel regressors for supervised machine learning
This paper develops a kernel perturbation-regulated (KPR) regressor based on errors-in-variables models. KPR offers a strong smoothing capability that is critical to the robustness of regression and classification results. For Gaussian cases, the notion of orthogonal polynomials is instrumental to optimal estimation and its error analysis. More precisely, the regressor may be expressed as a linear combination of simple Hermite regressors, each focusing on one (and only one) orthogonal polynomial. For both Gaussian and non-Gaussian cases, the paper formally establishes a “Two-Projection Theorem” that divides the estimation task into two projection stages: the first projection reveals the effect of model-induced error (caused by under-represented regressor models), while the second reveals the additional estimation error due to the (inevitable) input measurement error. The two-projection analysis leads to a closed-form error formula that is critical for the order/error tradeoff. Simulation results not only confirm the theoretical predictions but also demonstrate the superiority of KPR over conventional ridge regression in MSE reduction.
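Since the abstract does not spell out the KPR estimator itself, the Python sketch below only illustrates the underlying idea under stated assumptions: in an errors-in-variables setting where inputs carry Gaussian perturbation of (assumed known) variance sigma², the expected Gaussian RBF kernel E[k_s(x + eps, x')] is again a Gaussian kernel with squared bandwidth s² + sigma² (up to a constant factor), so a noise-aware, "perturbation-regulated" kernel ridge regressor can be obtained by widening the kernel accordingly. The function names, bandwidth/ridge values, and synthetic target are all hypothetical; this is a minimal comparison against conventional kernel ridge regression, not the paper's KPR formulation.

# Illustrative sketch only: the paper's exact KPR estimator is not given in the
# abstract. This compares ordinary Gaussian kernel ridge regression against a
# "perturbation-regulated" variant that widens the kernel by the assumed
# input-noise variance (a common errors-in-variables smoothing heuristic).
# All parameter values (bandwidth s2, noise level sigma2, ridge rho) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, s2):
    """Gaussian RBF kernel matrix between rows of A and B, squared bandwidth s2."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s2))

def kernel_ridge_fit(X, y, s2, rho):
    """Fit kernel ridge regression and return a predictor f(X_test)."""
    K = gauss_kernel(X, X, s2)
    alpha = np.linalg.solve(K + rho * np.eye(len(X)), y)
    return lambda Xt: gauss_kernel(Xt, X, s2) @ alpha

# Errors-in-variables training data: the regression inputs are observed with noise.
n, sigma2 = 200, 0.05                      # sample size and input-noise variance (assumed known)
f = lambda x: np.sin(3 * x[:, 0])          # synthetic ground-truth function
X_clean = rng.uniform(-1, 1, size=(n, 1))
X_noisy = X_clean + rng.normal(0, np.sqrt(sigma2), size=X_clean.shape)
y = f(X_clean) + rng.normal(0, 0.1, size=n)

s2, rho = 0.1, 1e-2
plain = kernel_ridge_fit(X_noisy, y, s2, rho)                # conventional kernel ridge regression
regulated = kernel_ridge_fit(X_noisy, y, s2 + sigma2, rho)   # perturbation-widened kernel

X_test = np.linspace(-1, 1, 400)[:, None]
for name, model in [("ridge", plain), ("perturbation-regulated", regulated)]:
    mse = np.mean((model(X_test) - f(X_test)) ** 2)
    print(f"{name:>24s} test MSE: {mse:.4f}")

Running the script prints the test MSE of both regressors on noise-free test inputs; under these assumptions the widened kernel tends to smooth away part of the variance induced by the input perturbation, which is the kind of MSE reduction relative to ridge regression that the abstract attributes to KPR.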