Exploring Large Language Models’ Emotion Detection Abilities: Use Cases From the Middle East
Radhakrishnan Venkatakrishnan, Mahsa Goodarzi, M. A. Canbaz
2023 IEEE Conference on Artificial Intelligence (CAI), June 2023
DOI: 10.1109/CAI54212.2023.00110
Citations: 0
Abstract
Emotion detection is a critical component in enabling machines to understand and respond to human emotions. In this paper, we explore the potential of pre-trained transformer-based language models, namely GPT-3.5 and RoBERTa, for emotion detection in natural language processing. Specifically, we examine the quality of emotion detection in LLMs and their potential as automatic label generators for improving accuracy. We analyze the emotional response to two significant events: the murder of Zhina (Mahsa) Amini in Iran and the earthquake in Turkey and Syria. We observe that GPT’s generative nature hinders its performance in fine-grained emotion classification, whereas RoBERTa’s fine-tuning abilities and extensive pre-training specifically for emotions enable more accurate predictions within a limited set of emotion labels.
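As a rough illustration of the two approaches the abstract compares, the sketch below classifies a sample text with an off-the-shelf emotion-tuned RoBERTa-family checkpoint from the Hugging Face Hub and, separately, prompts GPT-3.5 for a zero-shot label. The checkpoint name, prompt wording, emotion label set, and example text are assumptions for illustration only; they are not the authors' actual pipeline, prompts, or data.

```python
# Illustrative sketch only; the checkpoint, prompt, label set, and example
# text below are assumptions, not the setup reported in the paper.
from transformers import pipeline
from openai import OpenAI

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

# --- Fine-tuned RoBERTa-family classifier (fixed label set) ---------------
roberta_clf = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed checkpoint
)

def roberta_emotion(text: str) -> str:
    # The pipeline returns the top label, e.g. [{"label": "sadness", "score": 0.97}]
    return roberta_clf(text)[0]["label"]

# --- GPT-3.5 as a zero-shot label generator -------------------------------
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

def gpt_emotion(text: str) -> str:
    prompt = (
        "Classify the dominant emotion in the text below. "
        f"Answer with exactly one word from: {', '.join(EMOTIONS)}.\n\n{text}"
    )
    resp = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    sample = "My heart breaks for the families affected by the earthquake."
    print("RoBERTa:", roberta_emotion(sample))
    print("GPT-3.5:", gpt_emotion(sample))
```

The contrast mirrors the abstract's finding: the RoBERTa classifier is constrained to a fixed, trained label set, while GPT-3.5 generates free text that must be coerced (here, via the prompt's one-word instruction) into a fine-grained label.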