An Efficient Fine-tuning of Generative Language Model for Aspect-Based Sentiment Analysis

Chaelyn Lee, Hanyong Lee, Kyumin Kim, Sojeong Kim, Jae-Soung Lee
DOI: 10.1109/ICCE59016.2024.10444216
Published in: 2024 IEEE International Conference on Consumer Electronics (ICCE), pp. 1-4, January 6, 2024.

Abstract

Sentiment analysis is an important field of study that automatically extracts the polarity of consumer or user opinions, providing valuable data for decision-making in companies and organizations. It has further developed into Aspect-Based Sentiment Analysis (ABSA), which predicts the polarity toward a specific aspect within a sentence. Recently, research has shifted from classification-based sentiment analysis toward generative language models that produce more diverse and accurate sentiment expressions. We propose a fine-tuning method that introduces Low-Rank Adaptation (LoRA) into a generative language model to improve the performance of these generative ABSA models and enable efficient learning. In this paper, GloABSA (GPT2+LoRA Aspect-Based Sentiment Analysis) aims to improve the learning efficiency of the previously proposed GPTABSA model. LoRA is applied to the GPT-2 model to predict aspects and polarities using enhanced contextual information while reducing the number of trainable parameters, enabling efficient learning. Experiments on an ABSA benchmark dataset show that the proposed method outperforms previous studies while significantly reducing the number of parameters.
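The parameter reduction the abstract refers to comes from LoRA's low-rank reparameterization: a frozen pretrained weight W is adapted through a trainable update B·A of rank r, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out. The sketch below illustrates that idea in isolation; the class name, dimensions (768 matches GPT-2's hidden size), and hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer with a trainable low-rank LoRA update."""

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stands in for a GPT-2 projection matrix).
        self.W = rng.standard_normal((d_out, d_in))
        # Trainable down-projection A (small random init) and
        # up-projection B (zero init, so training starts from the frozen model).
        self.A = rng.standard_normal((r, d_in)) * 0.01
        self.B = np.zeros((d_out, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B (A x)
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        # Only A and B are updated; W stays frozen.
        return self.A.size + self.B.size

layer = LoRALinear(d_in=768, d_out=768, r=8)
full = 768 * 768              # parameters updated by full fine-tuning: 589,824
lora = layer.trainable_params()  # parameters updated by LoRA: 12,288 (~2%)
print(full, lora)
```

Because B is zero-initialized, the adapted layer initially reproduces the frozen model exactly, and fine-tuning only has to learn the low-rank correction.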