Tree Induction vs. Logistic Regression: A Learning Curve Analysis

C. Perlich, F. Provost, J. Simonoff
{"title":"Tree Induction Vs Logistic Regression: A Learning Curve Analysis","authors":"C. Perlich, F. Provost, J. Simonoff","doi":"10.1162/153244304322972694","DOIUrl":null,"url":null,"abstract":"Tree induction and logistic regression are two standard, off-the-shelf methods for building models for classification. We present a large-scale experimental comparison of logistic regression and tree induction, assessing classification accuracy and the quality of rankings based on class-membership probabilities. We use a learning-curve analysis to examine the relationship of these measures to the size of the training set. The results of the study show several things. (1) Contrary to some prior observations, logistic regression does not generally outperform tree induction. (2) More specifically, and not surprisingly, logistic regression is better for smaller training sets and tree induction for larger data sets. Importantly, this often holds for training sets drawn from the same domain (that is, the learning curves cross), so conclusions about induction-algorithm superiority on a given domain must be based on an analysis of the learning curves. (3) Contrary to conventional wisdom, tree induction is effective at producing probability-based rankings, although apparently comparatively less so for a given training-set size than at making classifications. Finally, (4) the domains on which tree induction and logistic regression are ultimately preferable can be characterized surprisingly well by a simple measure of the separability of signal from noise.","PeriodicalId":124312,"journal":{"name":"New York University Stern School of Business Research Paper Series","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"375","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"New York University Stern School of Business Research Paper Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1162/153244304322972694","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 375

Abstract

Tree induction and logistic regression are two standard, off-the-shelf methods for building models for classification. We present a large-scale experimental comparison of logistic regression and tree induction, assessing classification accuracy and the quality of rankings based on class-membership probabilities. We use a learning-curve analysis to examine the relationship of these measures to the size of the training set. The results of the study show several things. (1) Contrary to some prior observations, logistic regression does not generally outperform tree induction. (2) More specifically, and not surprisingly, logistic regression is better for smaller training sets and tree induction for larger data sets. Importantly, this often holds for training sets drawn from the same domain (that is, the learning curves cross), so conclusions about induction-algorithm superiority on a given domain must be based on an analysis of the learning curves. (3) Contrary to conventional wisdom, tree induction is effective at producing probability-based rankings, although apparently comparatively less so for a given training-set size than at making classifications. Finally, (4) the domains on which tree induction and logistic regression are ultimately preferable can be characterized surprisingly well by a simple measure of the separability of signal from noise.
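The sketch below illustrates the kind of learning-curve comparison the abstract describes, though it is not the paper's experimental setup: fit a decision tree and a logistic regression at increasing training-set sizes and record both classification accuracy and ranking quality (AUC based on class-membership probabilities). It uses scikit-learn and a synthetic dataset as stand-ins; the dataset, sizes, and model settings are illustrative assumptions only.

```python
# Minimal learning-curve sketch (assumed setup, not the paper's datasets or protocol):
# compare tree induction and logistic regression at increasing training-set sizes,
# measuring accuracy and probability-ranking quality (AUC) on a held-out test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a single domain; the paper uses many real datasets.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test = X[:15000], X[15000:]
y_train, y_test = y[:15000], y[15000:]

for n in [100, 500, 2000, 10000]:  # increasing training-set sizes
    Xs, ys = X_train[:n], y_train[:n]
    for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                        ("tree", DecisionTreeClassifier(random_state=0))]:
        model.fit(Xs, ys)
        acc = accuracy_score(y_test, model.predict(X_test))
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"n={n:6d}  {name:8s}  accuracy={acc:.3f}  AUC={auc:.3f}")
```

Plotting accuracy and AUC against training-set size for each model would give the learning curves whose relative position and crossing behavior the paper analyzes.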