Comparing the Wrapper Feature Selection Evaluators on Twitter Sentiment Classification

N. Suchetha, Anupama Nikhil, P. Hrudya
Published in: 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), February 2019
DOI: 10.1109/ICCIDS.2019.8862033
Citations: 25

Abstract

The application of machine learning algorithms to text data is challenging in several ways, the greatest challenge being the presence of a sparse, high-dimensional feature set. Feature selection methods are effective in reducing the dimensionality of the data and help improve both the computational efficiency and the performance of the learned model. Recently, evolutionary computation (EC) methods have shown success in solving the feature selection problem. However, due to the large number of evaluations required, EC-based feature selection methods on text data are computationally expensive. This paper examines the different evaluation classifiers used in EC-based wrapper feature selection methods. A two-stage feature selection method is applied to Twitter data for sentiment classification. In the first stage, a filter feature selection method based on Information Gain (IG) is applied. In the second stage, a comparison is made between four EC feature selection methods, Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Cuckoo Search (CS), and Firefly Search, with different classifiers as subset evaluators. LibLinear, K-Nearest Neighbours (KNN), and Naive Bayes (NB) are the classifiers used for wrapper feature-subset evaluation. The time required to evaluate the feature subsets with each chosen classifier is also measured. Finally, the effect of this combined feature selection approach is evaluated using six different learners. Results demonstrate that LibLinear is computationally efficient and achieves the best performance.
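The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it uses synthetic data in place of the Twitter TF-IDF matrix, scikit-learn's mutual information as the Information Gain estimate, a LinearSVC (LibLinear-backed) as the wrapper evaluator, and a bare-bones binary PSO whose particle count, iteration budget, and coefficients are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-in for the sparse TF-IDF matrix of tweets.
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)

# Stage 1: Information Gain filter (mutual information as the IG estimate).
k = 20
X_f = SelectKBest(mutual_info_classif, k=k).fit_transform(X, y)

def fitness(mask):
    """Wrapper fitness: CV accuracy of the evaluator on the masked features."""
    if mask.sum() == 0:
        return 0.0
    clf = LinearSVC(dual=True, max_iter=5000)  # LibLinear-backed evaluator
    return cross_val_score(clf, X_f[:, mask.astype(bool)], y, cv=3).mean()

# Stage 2: a minimal binary PSO over feature masks (illustrative settings).
n_particles, n_iter = 8, 10
pos = (rng.random((n_particles, k)) < 0.5).astype(float)
vel = rng.normal(0.0, 1.0, (n_particles, k))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
gbest_fit = pbest_fit.max()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, k))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Sigmoid transfer function maps velocities to bit-flip probabilities.
    pos = (rng.random((n_particles, k)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    if fit.max() > gbest_fit:
        gbest, gbest_fit = pos[fit.argmax()].copy(), fit.max()

print(f"selected {int(gbest.sum())}/{k} features, CV accuracy {gbest_fit:.3f}")
```

Swapping the evaluator inside `fitness` for `KNeighborsClassifier` or `GaussianNB` reproduces the kind of comparison the paper makes; since `fitness` is invoked once per particle per iteration, the evaluator's training cost dominates the total runtime, which is why the choice of wrapper classifier matters.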