Reinforcement Learning Based Decision Tree Induction Over Data Streams with Concept Drifts
Christopher Blake, Eirini Ntoutsi
2018 IEEE International Conference on Big Knowledge (ICBK), November 2018
DOI: 10.1109/ICBK.2018.00051
Citations: 4
Abstract
Traditional decision tree induction algorithms are greedy: they make locally optimal decisions at each node based on splitting criteria such as information gain or the Gini index. A reinforcement learning approach to decision tree building seems more suitable, as it aims at maximizing the long-term return rather than optimizing a short-term goal. In this paper, a reinforcement learning approach is used to learn a policy for a Markov Decision Process (MDP), which enables the creation of a short and highly accurate decision tree. Moreover, the use of reinforcement learning naturally enables additional functionality such as learning under concept drifts, feature importance weighting, inclusion of new features, forgetting of obsolete ones, and classification with incomplete data. To deal with concept drifts, a reset operation is proposed that allows for local re-learning of outdated parts of the tree. Preliminary experiments show that such an approach allows for better adaptation to concept drifts and changing feature spaces, while still producing a short and highly accurate decision tree.
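The abstract does not spell out the MDP formulation, so the following is only a rough illustrative sketch, not the paper's algorithm. It casts node-level decisions as actions ("split on feature f" or "stop and predict the majority label"), uses leaf accuracy minus a depth penalty as the return, and learns tabular Q-values keyed by the path to a node. The XOR dataset, the reward shape, `DEPTH_PENALTY`, and the `reset_subtree` helper (a toy analogue of the paper's reset operation) are all assumptions made for this sketch.

```python
import random
from collections import defaultdict

# Toy XOR dataset: no single split improves accuracy on its own, so a
# greedy, gain-based learner struggles, while a return-maximizing agent
# can still discover the depth-2 tree that classifies perfectly.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)] * 10

STOP = "stop"
ACTIONS = [0, 1, STOP]            # split on feature 0 / 1, or make a leaf
ALPHA, GAMMA, EPS = 0.3, 0.9, 0.2  # assumed hyperparameters
DEPTH_PENALTY = 0.05              # biases the return toward short trees

q = defaultdict(float)            # Q[(state, action)]; state = path to node

def majority_acc(samples):
    ones = sum(y for _, y in samples)
    return max(ones, len(samples) - ones) / len(samples)

def run_episode(samples, path=()):
    """Grow one tree top-down, updating Q with the observed return."""
    if not samples:
        return 0.0
    used = {f for f, _ in path}
    legal = [a for a in ACTIONS if a == STOP or a not in used]
    if random.random() < EPS:                     # epsilon-greedy choice
        a = random.choice(legal)
    else:
        a = max(legal, key=lambda x: q[(path, x)])
    if a == STOP:
        ret = majority_acc(samples)               # leaf reward = accuracy
    else:
        left = [s for s in samples if s[0][a] == 0]
        right = [s for s in samples if s[0][a] == 1]
        ret = -DEPTH_PENALTY + GAMMA * (
            run_episode(left, path + ((a, 0),)) +
            run_episode(right, path + ((a, 1),))) / 2
    q[(path, a)] += ALPHA * (ret - q[(path, a)])
    return ret

def reset_subtree(prefix):
    """Toy analogue of the paper's reset operation: forget the learned
    values at and below one node so that subtree is re-learned locally."""
    for key in [k for k in q if k[0][:len(prefix)] == prefix]:
        del q[key]

random.seed(0)
for _ in range(600):
    run_episode(DATA)

# Stopping at the root can never beat 50% accuracy on XOR, so the
# trained root policy should prefer splitting over stopping.
root_action = max(ACTIONS, key=lambda a: q[((), a)])
```

Because the reward is the discounted sum over the whole subtree, the agent is credited for splits that only pay off two levels down, which is exactly the long-term-return argument the abstract makes against greedy criteria. On drift, calling `reset_subtree` on the affected path would wipe only that region's Q-values, mirroring the proposed local re-learning.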