{"title":"Comparative Analysis of BERT-base Transformers and Deep Learning Sentiment Prediction Models","authors":"Anandan Chinnalagu, A. Durairaj","doi":"10.1109/SMART55829.2022.10047651","DOIUrl":null,"url":null,"abstract":"The state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) and Deep Learning (DL) models are used for Natural Language Processing (NLP) applications. Social media marketing and customers positive sentiments play major role for many online businesses.It is a crucial task for companies to predict customers sentiment based on context from online reviews. Predicting accurate sentiment is a time-consuming and challenging task due to high volume of unstructured customers review dataset. There are many previous experimental results reveals the performance and inaccuracy issues on large scale customer reviews datasets. This paper presents the comparative analysis of experimental research work on BERT, Hybrid fastText-BILSTM, and fastText Trigram models overcome more accurate sentiment prediction challenges. We propose fine-tuned BERT and Hybrid fastText-BILSTM models for large customer review datasets. 
This comparative analysis results show that the proposed fine-tuned BERT model performs better compare to other DL models in terms of accuracy and other performance measures.","PeriodicalId":431639,"journal":{"name":"2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMART55829.2022.10047651","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
The state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) and Deep Learning (DL) models are used for Natural Language Processing (NLP) applications. Social media marketing and customers' positive sentiments play a major role for many online businesses. It is a crucial task for companies to predict customers' sentiment based on context from online reviews. Predicting sentiment accurately is a time-consuming and challenging task due to the high volume of unstructured customer review data. Many previous experimental results reveal performance and accuracy issues on large-scale customer review datasets. This paper presents a comparative analysis of experimental research work on BERT, hybrid fastText-BiLSTM, and fastText trigram models to overcome the challenges of accurate sentiment prediction. We propose fine-tuned BERT and hybrid fastText-BiLSTM models for large customer review datasets. The results of this comparative analysis show that the proposed fine-tuned BERT model performs better than the other DL models in terms of accuracy and other performance measures.
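To illustrate the fastText trigram approach the abstract compares against, here is a minimal pure-Python sketch of fastText-style word n-gram feature hashing (up to trigrams, as with fastText's `wordNgrams=3` option). This is an assumed illustration of the general technique, not the authors' code; the bucket count and preprocessing are hypothetical simplifications.

```python
import zlib

def trigram_features(text, num_buckets=1 << 18):
    """Hash unigrams, bigrams, and trigrams of a text into a fixed
    number of feature buckets, fastText-style (simplified sketch)."""
    # Lowercase and split on whitespace (simplified preprocessing).
    tokens = text.lower().split()
    # Collect unigrams plus word bigrams and trigrams.
    ngrams = list(tokens)
    for n in (2, 3):
        for i in range(len(tokens) - n + 1):
            ngrams.append(" ".join(tokens[i:i + n]))
    # Hash each n-gram into a bounded bucket space so the feature
    # dimension stays fixed regardless of vocabulary size.
    # zlib.crc32 is used here for deterministic hashing across runs.
    features = {}
    for g in ngrams:
        bucket = zlib.crc32(g.encode("utf-8")) % num_buckets
        features[bucket] = features.get(bucket, 0) + 1
    return features
```

A linear classifier (e.g. logistic regression over these sparse counts) trained on such features is the essence of the fastText baseline; the hybrid fastText-BiLSTM and fine-tuned BERT models the paper proposes replace this bag-of-n-grams representation with learned contextual ones.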