Authors: Imad Zyout, Mo’ath Zyout
DOI: 10.11591/ijai.v13.i2.pp2173-2184 (https://doi.org/10.11591/ijai.v13.i2.pp2173-2184)
Journal: IAES International Journal of Artificial Intelligence (IJ-AI)
Published: 2024-06-01 (Journal Article)
Citations: 0
Sentiment analysis of student feedback using attention-based RNN and transformer embedding

Abstract
Sentiment analysis systems aim to assess people’s opinions across various domains by collecting and categorizing feedback and reviews. In this study, we propose a sentiment analysis system that leverages three distinct embedding techniques: automatically learned (trainable) embeddings, global vectors for word representation (GloVe), and bidirectional encoder representations from transformers (BERT). The system features an attention layer, and the best model was chosen through rigorous comparisons. To develop the sentiment analysis model, we employed a hybrid dataset of students’ feedback and comments. This dataset comprises 3,820 comments: 2,773 from formal evaluations and 1,047 generated by ChatGPT through prompt engineering. Our main motivation for integrating generative AI was to balance positive and negative comments. We also explored recurrent neural networks (RNNs), gated recurrent units (GRUs), long short-term memory (LSTM), and bidirectional long short-term memory (Bi-LSTM), with and without pre-trained GloVe embeddings. These techniques produced F-scores ranging from 67% to 69%. In contrast, the BERT-based sentiment model, particularly its Keras implementation, achieved higher F-scores ranging from 83% to 87%. The Bi-LSTM architecture outperformed the other recurrent models, and adding an attention layer further improved performance, yielding F-scores of 89% and 88% for the attention-based Bi-LSTM and BERT sentiment models, respectively.
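The abstract does not detail the attention layer, but attention over recurrent hidden states is commonly implemented as additive (Bahdanau-style) attention pooling: score each time step, softmax the scores into weights, and take the weighted sum of the hidden states as the sequence representation fed to the classifier. The sketch below illustrates that mechanism in plain NumPy; all parameter shapes and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, W, b, v):
    """Additive attention pooling over RNN outputs.

    hidden_states: (T, d) array, e.g. per-token outputs of a Bi-LSTM
                   for one comment of T tokens.
    W (d, d), b (d,), v (d,): learnable attention parameters.
    Returns the (d,) context vector (weighted sum of the T states)
    and the (T,) attention weights, which sum to 1.
    """
    scores = np.tanh(hidden_states @ W + b) @ v   # (T,) one score per token
    weights = softmax(scores)                     # normalized attention weights
    context = weights @ hidden_states             # (d,) sentence representation
    return context, weights

# Toy usage: 5 time steps, 8-dimensional hidden states, random parameters.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 8))
b = np.zeros(8)
v = rng.normal(size=8)
context, weights = attention_pool(H, W, b, v)
```

In a full sentiment model, `context` would replace the usual last-hidden-state summary before the final dense softmax layer; the learned weights let the model emphasise sentiment-bearing tokens, which is consistent with the F-score gains the abstract reports for the attention variants.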