Performance Evaluation of Different Word Embedding Techniques Across Machine Learning and Deep Learning Models

Tanmoy Mazumder, Shawan Das, Md. Hasibur Rahman, Tanjina Helaly, Tanmoy Sarkar Pias
DOI: 10.1109/ICCIT57492.2022.10055572
Published in: 2022 25th International Conference on Computer and Information Technology (ICCIT), 2022-12-17
Citations: 0

Abstract

Sentiment analysis is one of the core fields of Natural Language Processing (NLP). Numerous machine learning and deep learning algorithms have been developed for this task. Deep learning models generally perform better because they are trained on massive amounts of data. This, however, is also a disadvantage: collecting sufficient data is a challenge, and training on it requires devices with high computational power. Word embedding is a vital step in applying machine learning models to NLP tasks, and different embedding techniques affect the performance of machine learning algorithms. This paper evaluates the GloVe, CountVectorizer, and TF-IDF embedding techniques with multiple machine learning models and shows that the right combination of embedding technique and model (TF-IDF + Logistic Regression: 87.75% accuracy) can match or exceed the performance of deep learning models (LSTM: 87.89%).
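The paper's strongest classical combination, TF-IDF features fed into Logistic Regression, can be sketched with scikit-learn. This is a minimal illustrative sketch: the toy corpus, labels, and default hyperparameters below are assumptions, not the paper's dataset or exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment corpus (1 = positive, 0 = negative); stands in for the
# paper's actual dataset, which is not reproduced here.
texts = [
    "great movie, loved it",
    "terrible plot, waste of time",
    "wonderful acting throughout",
    "awful and boring",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each document into a weighted bag-of-words vector;
# Logistic Regression then learns a linear decision boundary over it.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["loved the acting"]))
```

Swapping `TfidfVectorizer()` for `CountVectorizer()` reproduces the paper's count-based variant with a one-line change, which is what makes this kind of embedding/model grid search cheap compared to training an LSTM.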