Minimax strategies for training classifiers under unknown priors

R. Alaíz-Rodríguez, Jesús Cid-Sueiro
{"title":"未知先验条件下分类器训练的极大极小策略","authors":"R. Alaíz-Rodríguez, Jesús Cid-Sueiro","doi":"10.1109/NNSP.2002.1030036","DOIUrl":null,"url":null,"abstract":"Most supervised learning algorithms are based on the assumption that the training data set reflects the underlying statistical model of the real data. However, this stationarity assumption is not always satisfied in practice: quite frequently, class prior probabilities are not in accordance with the class proportions in the training data set. The minimax approach is based on selecting the classifier that minimize the error probability under the worst case conditions. We propose a two-step learning algorithm to train a neural network in order to estimate the minimax classifier that is robust to changes in the class priors. During the first step, posterior probabilities based on training data priors are estimated. During the second step, class priors are modified in order to minimize a cost function that is asymptotically equivalent to the worst-case error rate. This procedure is illustrated on a softmax-based neural network. Several experimental results show the advantages of the proposed method with respect to other approaches.","PeriodicalId":117945,"journal":{"name":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Minimax strategies for training classifiers under unknown priors\",\"authors\":\"R. Alaíz-Rodríguez, Jesús Cid-Sueiro\",\"doi\":\"10.1109/NNSP.2002.1030036\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most supervised learning algorithms are based on the assumption that the training data set reflects the underlying statistical model of the real data. However, this stationarity assumption is not always satisfied in practice: quite frequently, class prior probabilities are not in accordance with the class proportions in the training data set. The minimax approach is based on selecting the classifier that minimize the error probability under the worst case conditions. We propose a two-step learning algorithm to train a neural network in order to estimate the minimax classifier that is robust to changes in the class priors. During the first step, posterior probabilities based on training data priors are estimated. During the second step, class priors are modified in order to minimize a cost function that is asymptotically equivalent to the worst-case error rate. This procedure is illustrated on a softmax-based neural network. 
Several experimental results show the advantages of the proposed method with respect to other approaches.\",\"PeriodicalId\":117945,\"journal\":{\"name\":\"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2002-11-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NNSP.2002.1030036\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.2002.1030036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Most supervised learning algorithms are based on the assumption that the training data set reflects the underlying statistical model of the real data. However, this stationarity assumption is not always satisfied in practice: quite frequently, class prior probabilities are not in accordance with the class proportions in the training data set. The minimax approach is based on selecting the classifier that minimizes the error probability under worst-case conditions. We propose a two-step learning algorithm to train a neural network in order to estimate the minimax classifier that is robust to changes in the class priors. During the first step, posterior probabilities based on training data priors are estimated. During the second step, class priors are modified in order to minimize a cost function that is asymptotically equivalent to the worst-case error rate. This procedure is illustrated on a softmax-based neural network. Several experimental results show the advantages of the proposed method with respect to other approaches.
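
The abstract describes the two-step procedure only at a high level, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes a scikit-learn LogisticRegression as a stand-in for the softmax posterior estimator, a synthetic two-class Gaussian problem with imbalanced training priors, and the observation that the worst-case error over unknown true priors equals the largest per-class conditional error. The helper names (reweighted_predict, worst_case_error) are invented for this example.

```python
# Illustrative sketch of the two-step minimax idea (not the authors' exact algorithm).
# Step 1 estimates posteriors under the training priors; step 2 searches over assumed
# class priors, reweights the posteriors accordingly, and keeps the prior whose
# classifier has the smallest worst-case (maximum per-class) conditional error.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighted_predict(posteriors, train_priors, assumed_priors):
    """Decide classes after converting posteriors estimated under train_priors
    to (unnormalized) posteriors under assumed_priors via Bayes reweighting."""
    scores = posteriors * (assumed_priors / train_priors)
    return np.argmax(scores, axis=1)

def worst_case_error(y_true, y_pred, n_classes):
    """Worst-case error over all possible true priors = max per-class conditional error."""
    errors = [np.mean(y_pred[y_true == c] != c) for c in range(n_classes)]
    return max(errors)

# Toy data: two Gaussian classes, imbalanced training set (priors 0.8 / 0.2).
rng = np.random.default_rng(0)
n0, n1 = 800, 200
X = np.vstack([rng.normal(-1.0, 1.0, (n0, 2)), rng.normal(+1.0, 1.0, (n1, 2))])
y = np.concatenate([np.zeros(n0, dtype=int), np.ones(n1, dtype=int)])
train_priors = np.bincount(y) / len(y)

# Step 1: estimate posterior probabilities under the training-data priors.
model = LogisticRegression().fit(X, y)
post = model.predict_proba(X)  # in practice, evaluate on held-out data

# Step 2: search assumed priors for the smallest worst-case error.
best_priors, best_wce = None, np.inf
for p0 in np.linspace(0.05, 0.95, 91):
    assumed = np.array([p0, 1.0 - p0])
    y_hat = reweighted_predict(post, train_priors, assumed)
    wce = worst_case_error(y, y_hat, 2)
    if wce < best_wce:
        best_priors, best_wce = assumed, wce

print("minimax priors ~", best_priors, " worst-case error ~", round(best_wce, 3))
```

For two classes a one-dimensional grid over the assumed prior suffices; with more classes, or with the cost function mentioned in the abstract, the prior adjustment would more naturally be carried out by gradient-based updates, and the posteriors should be evaluated on held-out data rather than the training set.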