Discounted expert weighting for concept drift

G. Ditzler, G. Rosen, R. Polikar
DOI: 10.1109/CIDUE.2013.6595773
Published in: 2013 IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments (CIDUE), 2013-04-16
Citations: 8

Abstract

Multiple expert systems (MES) have been widely used in machine learning because of their inherent ability to decrease variance and improve generalization performance by receiving advice from more than one expert. However, a typical MES explicitly assumes that training and testing data are independent and identically distributed (iid), an assumption that is often violated in practice when the probability distribution generating the data changes with time. One of the key aspects of any MES algorithm deployed in such environments is the decision rule used to combine the decisions of the experts. Many MES algorithms choose adaptive weighting schemes that adjust the weights of a classifier based on its loss in recent time, or use an average of the experts' probabilities. However, in a stochastic setting where the loss of an expert is uncertain at a future point in time, which combiner method is the most reliable? In this work, we show that non-uniform weighting of experts can provide a stable upper bound on loss compared to techniques such as a follow-the-leader or uniform methodology. Several well-studied MES approaches are tested on a variety of real-world data sets to support and demonstrate the theory.
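The adaptive weighting scheme the abstract describes, adjusting an expert's weight based on its recent loss, can be sketched with a simple discounted exponential-weights combiner. This is an illustrative sketch only, not the paper's exact algorithm: the function name, the discount factor `gamma`, and the learning rate `eta` are assumptions chosen for the example. A discount factor below 1 makes old losses count less, which is how the combiner tracks concept drift.

```python
import numpy as np

def discounted_weights(losses, eta=1.0, gamma=0.9):
    """Exponential weights over discounted cumulative expert losses.

    losses: (T, K) array giving the loss of each of K experts over T rounds.
    The discounted loss of expert i is sum_t gamma^(T-1-t) * losses[t, i],
    so recent rounds dominate; gamma=1 recovers ordinary cumulative loss.
    """
    T, K = losses.shape
    discount = gamma ** np.arange(T - 1, -1, -1)  # oldest round gets the smallest factor
    d = discount @ losses                         # discounted cumulative loss per expert
    w = np.exp(-eta * d)                          # exponential weighting: low loss -> high weight
    return w / w.sum()                            # normalize to a probability vector
```

Under this scheme an expert that performed well recently receives most of the weight even if it accumulated large losses long ago, in contrast to a follow-the-leader rule, which puts all weight on the expert with the lowest total loss and can switch abruptly.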