DL-TBAM: Deep Learning Transformer based Architecture Model for Sentiment Analysis on Tamil-English Dataset

M. Sangeetha, K. Nimala
{"title":"DL-TBAM: Deep Learning Transformer based Architecture Model for Sentiment Analysis on Tamil-English Dataset","authors":"M. Sangeetha, K. Nimala","doi":"10.3233/jifs-236971","DOIUrl":null,"url":null,"abstract":"NLP, or natural language processing, is a subfield of AI that aims to equip computers with the ability to understand and analyze human language. Sentiment analysis is a widely used application of NLP, particularly for examining attitudes expressed in online conversations. Nevertheless, many social media comments are written in languages that are not native to the authors, making sentiment analysis more difficult, especially for languages with limited resources, such as Tamil. To tackle this issue, a code-mixed and sentiment-annotated corpus in Tamil and English was created. This article will explain how the corpus was established, including the process of data collection and the assignment of polarities. The article will also explore the agreement between annotators and the results of sentiment analysis performed on the corpus. This work signifies various performance metrics such as precision, recall, support, and F1-score for the transformer-based model such as BERT, RoBerta, and XLM-RoBerta. Among the various models, XLM-Robert shows slightly significant positive results on the code-mixed corpus when compared to the state of art models.","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Intelligent & Fuzzy Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/jifs-236971","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

NLP, or natural language processing, is a subfield of AI that aims to equip computers with the ability to understand and analyze human language. Sentiment analysis is a widely used application of NLP, particularly for examining attitudes expressed in online conversations. Nevertheless, many social media comments are written in languages that are not native to the authors, making sentiment analysis more difficult, especially for languages with limited resources, such as Tamil. To tackle this issue, a code-mixed, sentiment-annotated corpus in Tamil and English was created. This article explains how the corpus was established, including the data collection process and the assignment of polarities. It also examines the agreement between annotators and the results of sentiment analysis performed on the corpus. The work reports performance metrics such as precision, recall, support, and F1-score for transformer-based models, namely BERT, RoBERTa, and XLM-RoBERTa. Among these models, XLM-RoBERTa shows slightly better results on the code-mixed corpus compared with the state-of-the-art models.
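The abstract names XLM-RoBERTa and the per-class metrics (precision, recall, support, F1-score) used to compare the transformer-based models. The sketch below is not the authors' code; it only illustrates how such an evaluation is typically wired up with the Hugging Face transformers library and scikit-learn. The checkpoint name, the three-way polarity scheme, and the example comments are assumptions made for illustration.

```python
# Minimal evaluation sketch (illustrative, not the paper's pipeline):
# classify Tamil-English code-mixed comments with an XLM-RoBERTa
# sequence-classification head and report precision/recall/F1/support.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import classification_report

MODEL_NAME = "xlm-roberta-base"               # assumed base checkpoint; the fine-tuned weights are not public here
LABELS = ["negative", "neutral", "positive"]  # assumed polarity scheme

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))
model.eval()

def predict(texts):
    """Return a predicted label index for each code-mixed comment."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()

# Hypothetical test comments and gold labels, for illustration only.
texts = ["padam semma mass da", "worst movie, waste of time"]
gold = [2, 0]

pred = predict(texts)
print(classification_report(gold, pred, labels=list(range(len(LABELS))),
                            target_names=LABELS, zero_division=0))
```

In practice, fine-tuning on the annotated corpus would precede this step; classification_report then prints precision, recall, F1-score, and support per polarity class, matching the metrics listed in the abstract.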