Comparative Analysis of BERT-base Transformers and Deep Learning Sentiment Prediction Models

Anandan Chinnalagu, A. Durairaj
{"title":"Comparative Analysis of BERT-base Transformers and Deep Learning Sentiment Prediction Models","authors":"Anandan Chinnalagu, A. Durairaj","doi":"10.1109/SMART55829.2022.10047651","DOIUrl":null,"url":null,"abstract":"The state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) and Deep Learning (DL) models are used for Natural Language Processing (NLP) applications. Social media marketing and customers positive sentiments play major role for many online businesses.It is a crucial task for companies to predict customers sentiment based on context from online reviews. Predicting accurate sentiment is a time-consuming and challenging task due to high volume of unstructured customers review dataset. There are many previous experimental results reveals the performance and inaccuracy issues on large scale customer reviews datasets. This paper presents the comparative analysis of experimental research work on BERT, Hybrid fastText-BILSTM, and fastText Trigram models overcome more accurate sentiment prediction challenges. We propose fine-tuned BERT and Hybrid fastText-BILSTM models for large customer review datasets. This comparative analysis results show that the proposed fine-tuned BERT model performs better compare to other DL models in terms of accuracy and other performance measures.","PeriodicalId":431639,"journal":{"name":"2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMART55829.2022.10047651","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

State-of-the-art Bidirectional Encoder Representations from Transformers (BERT) and Deep Learning (DL) models are widely used in Natural Language Processing (NLP) applications. Social media marketing and positive customer sentiment play a major role for many online businesses, so predicting customer sentiment from the context of online reviews is a crucial task for companies. Accurate sentiment prediction is time-consuming and challenging because of the high volume of unstructured customer review data, and many previous experimental results reveal performance and accuracy issues on large-scale customer review datasets. This paper presents a comparative analysis of experimental work on BERT, hybrid fastText-BiLSTM, and fastText trigram models aimed at overcoming these sentiment prediction challenges. We propose fine-tuned BERT and hybrid fastText-BiLSTM models for large customer review datasets. The comparative results show that the proposed fine-tuned BERT model outperforms the other DL models in accuracy and other performance measures.
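The paper does not publish code, but the fine-tuning setup it describes follows the standard pattern for adapting bert-base to sentiment classification. Below is a minimal sketch using the Hugging Face Transformers library; the model name, hyperparameters, and the two-example toy dataset are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tuning bert-base for binary sentiment
# classification on customer reviews. Hyperparameters and the toy
# dataset below are illustrative assumptions, not the paper's setup.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

class ReviewDataset(Dataset):
    """Wraps raw review texts and 0/1 sentiment labels."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)

# Toy stand-in for a large unstructured customer-review corpus.
texts = ["Great product, fast delivery!", "Terrible quality, do not buy."]
labels = [1, 0]
loader = DataLoader(ReviewDataset(texts, labels, tokenizer),
                    batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs is typical for BERT fine-tuning
    for batch in loader:
        optimizer.zero_grad()
        out = model(**batch)  # returns cross-entropy loss and logits
        out.loss.backward()
        optimizer.step()
```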
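For comparison, the hybrid fastText-BiLSTM baseline can be sketched as a bidirectional LSTM reading fastText-style word vectors, with the concatenated final hidden states feeding a sentiment classifier. The dimensions, random embedding matrix, and smoke-test inputs here are placeholders; the paper's exact architecture is not published.

```python
# Hedged sketch of the hybrid fastText-BiLSTM idea: fastText-style
# word vectors feed a bidirectional LSTM whose final forward and
# backward states drive a sentiment classifier.
import torch
import torch.nn as nn

class FastTextBiLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128,
                 num_classes=2, pretrained=None):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        if pretrained is not None:  # e.g. vectors loaded from a
            self.embed.weight.data.copy_(pretrained)  # fastText .vec file
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq, embed_dim)
        _, (h_n, _) = self.bilstm(x)     # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=1)  # forward + backward states
        return self.fc(h)                # class logits

# Smoke test with random token ids standing in for tokenized reviews.
model = FastTextBiLSTM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 50)))  # 4 reviews, 50 tokens
print(logits.shape)  # torch.Size([4, 2])
```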