Noise reduction in regression tasks with distance, instance, attribute and density weighting
M. Kordos, A. Rusiecki, M. Blachnik
2015 IEEE 2nd International Conference on Cybernetics (CYBCONF), June 24, 2015
DOI: 10.1109/CYBCONF.2015.7175909
The idea presented in this paper is to gradually decrease the influence of selected training vectors on the model: the higher the probability that a given vector is an outlier, the more its influence on training should be limited. This approach can be applied in two ways: in the input space (e.g., with methods such as k-NN for prediction and for instance selection) and in the output space (e.g., when calculating the error of an MLP neural network). The strength of this gradual influence reduction is that no crisp outlier definition needs to be set (outliers are difficult to define optimally). Moreover, according to the presented experimental results, this approach outperforms other methods when learning a model from noisy data.