Reinforcement Learning Based Decision Tree Induction Over Data Streams with Concept Drifts

Christopher Blake, Eirini Ntoutsi
{"title":"Reinforcement Learning Based Decision Tree Induction Over Data Streams with Concept Drifts","authors":"Christopher Blake, Eirini Ntoutsi","doi":"10.1109/ICBK.2018.00051","DOIUrl":null,"url":null,"abstract":"Traditional decision tree induction algorithms are greedy with locally-optimal decisions made at each node based on splitting criteria like information gain or Gini index. A reinforcement learning approach to decision tree building seems more suitable as it aims at maximizing the long-term return rather than optimizing a short-term goal. In this paper, a reinforcement learning approach is used to train a Markov Decision Process (MDP), which enables the creation of a short and highly accurate decision tree. Moreover, the use of reinforcement learning naturally enables additional functionality such as learning under concept drifts, feature importance weighting, inclusion of new features and forgetting of obsolete ones as well as classification with incomplete data. To deal with concept drifts, a reset operation is proposed that allows for local re-learning of outdated parts of the tree. Preliminary experiments show that such an approach allows for better adaptation to concept drifts and changing feature spaces, while still producing a short and highly accurate decision tree.","PeriodicalId":144958,"journal":{"name":"2018 IEEE International Conference on Big Knowledge (ICBK)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Big Knowledge (ICBK)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICBK.2018.00051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Traditional decision tree induction algorithms are greedy, making locally optimal decisions at each node based on splitting criteria such as information gain or the Gini index. A reinforcement learning approach to decision tree building seems more suitable, as it aims at maximizing the long-term return rather than optimizing a short-term goal. In this paper, a reinforcement learning approach is used to learn a policy for a Markov Decision Process (MDP), which enables the creation of a short and highly accurate decision tree. Moreover, the use of reinforcement learning naturally enables additional functionality such as learning under concept drift, feature importance weighting, inclusion of new features and forgetting of obsolete ones, as well as classification with incomplete data. To deal with concept drift, a reset operation is proposed that allows for local re-learning of outdated parts of the tree. Preliminary experiments show that such an approach allows for better adaptation to concept drift and changing feature spaces, while still producing a short and highly accurate decision tree.
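To make the idea concrete, here is a minimal sketch of casting tree induction as an MDP solved with tabular Q-learning. It is an illustration under assumptions, not the authors' implementation: a state is taken to be the set of features already tested along the current path, an action either splits on a remaining feature or stops and predicts, and `reset_subtree` mirrors the paper's reset operation by discarding learned values below a drifting path so that region is re-learned locally. The reward signal, state encoding, and the `alpha`/`gamma`/`epsilon` values are illustrative placeholders.

```python
# Minimal sketch (assumptions, not the paper's code) of decision-tree
# induction framed as an MDP and solved with tabular Q-learning.
import random
from collections import defaultdict

class TreeInductionMDP:
    def __init__(self, n_features, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.n_features = n_features
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q[state][action]: state = frozenset of features tested on the
        # current path; action = feature index to split on, or "stop".
        self.Q = defaultdict(lambda: defaultdict(float))

    def actions(self, state):
        # Split on any not-yet-tested feature, or stop and predict.
        remaining = [f for f in range(self.n_features) if f not in state]
        return remaining + ["stop"]

    def choose(self, state):
        # Epsilon-greedy selection over splits and stopping.
        acts = self.actions(state)
        if random.random() < self.epsilon:
            return random.choice(acts)
        return max(acts, key=lambda a: self.Q[state][a])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup; the reward (e.g. accuracy
        # gain on held-out stream data) is an assumed placeholder signal.
        best_next = max(self.Q[next_state].values(), default=0.0)
        td_error = reward + self.gamma * best_next - self.Q[state][action]
        self.Q[state][action] += self.alpha * td_error

    def reset_subtree(self, path_prefix):
        # Hedged reading of the reset operation: when drift is detected
        # under some path, drop every state extending that path so only
        # the outdated region of the tree is re-learned.
        stale = [s for s in self.Q if path_prefix <= s]
        for s in stale:
            del self.Q[s]
```

Once the values converge, the greedy policy over Q defines the tree: at each node, the highest-valued action gives the split, and "stop" marks a leaf. A drift detector would then call something like `reset_subtree(frozenset({2}))` to forget everything below splits on feature 2 while leaving the rest of the tree intact.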