Landscaping of random forests through controlled deforestation
Kausik Das, Abhijit Guha Roy, J. Chatterjee, D. Sheet
2016 Twenty Second National Conference on Communication (NCC), 2016-03-04. DOI: 10.1109/NCC.2016.7561112
Citations: 1
Abstract
Random forest (RF) is an ensemble learner constructed from a set of decision trees, where each tree is trained on randomly bootstrapped samples and the tree outputs are aggregated to provide a decision. While the generalization error is reduced by increasing the number of trees in an RF, doing so substantially increases the testing time complexity, inhibiting fast deployment in practical applications. In this paper, we propose a post-training optimization technique termed landscaping of RF, which reduces computational complexity by compensating for trees associated with similar decision boundaries. This allows faster deployment of the RF without compromising its performance. Landscaping is achieved through a two-stage mechanism: (i) computation of decision similarity between all pairs of trees in the RF, and (ii) deletion of the computationally expensive tree of each similar pair, with decision bias compensation for the removed tree. Performance of the proposed methodology was evaluated on three publicly available datasets. The RF error before and after landscaping over the datasets was 0.1084 ± 0.03 and 0.1087 ± 0.03, respectively, while the testing time fell from 2.5508 ± 0.08 sec. before landscaping to 0.9066 ± 0.19 sec. after landscaping, a 32-76% reduction in execution time. These results strongly substantiate our claim that landscaping of RF through controlled deforestation achieves deployment speedup without compromising decision quality.
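To make the two-stage mechanism concrete, below is a minimal sketch of post-training pruning of a trained random forest, assuming scikit-learn's RandomForestClassifier. The similarity measure (prediction agreement on a held-out reference set), the cost proxy (node count), and the 0.98 threshold are illustrative assumptions, not the paper's exact formulation, and the bias-compensation step for removed trees is only noted, not implemented.

```python
# Hypothetical "landscaping" sketch:
# (i) compute decision similarity between every pair of trees on a
#     reference set, (ii) drop the costlier tree of each highly
#     similar pair. Similarity metric, cost proxy, and threshold are
#     assumptions for illustration only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Per-tree predictions on the reference set: shape (n_trees, n_samples).
preds = np.array([t.predict(X_te) for t in rf.estimators_])

keep = list(range(len(rf.estimators_)))
threshold = 0.98  # assumed similarity cut-off
for i in range(len(rf.estimators_)):
    for j in range(i + 1, len(rf.estimators_)):
        if i in keep and j in keep:
            agreement = np.mean(preds[i] == preds[j])
            if agreement >= threshold:
                # Remove the tree with more nodes (a simple cost proxy).
                ni = rf.estimators_[i].tree_.node_count
                nj = rf.estimators_[j].tree_.node_count
                keep.remove(i if ni > nj else j)

pruned = [rf.estimators_[k] for k in keep]
print(f"kept {len(pruned)} of {len(rf.estimators_)} trees")

# Majority vote over the surviving trees; the paper's decision bias
# compensation for removed trees is omitted in this sketch.
votes = np.array([t.predict(X_te) for t in pruned])
y_hat = np.round(votes.mean(axis=0))
print("accuracy after pruning:", np.mean(y_hat == y_te))
```

In this sketch the pruned ensemble is evaluated by simple majority vote, so any accuracy gap relative to the full forest is what the paper's bias-compensation term would be intended to close.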