Chaelyn Lee, Hanyong Lee, Kyumin Kim, Sojeong Kim, Jae-Soung Lee
{"title":"基于方面的情感分析生成语言模型的高效微调","authors":"Chaelyn Lee, Hanyong Lee, Kyumin Kim, Sojeong Kim, Jae-Soung Lee","doi":"10.1109/ICCE59016.2024.10444216","DOIUrl":null,"url":null,"abstract":"Sentiment analysis is considered as an important study where be able to automatically extract the polarity of consumers or users' opinions and use it as important data for decision-making in companies or organizations. It has further developed into Aspect-Based Sentiment Analysis research that predicts the polarity for a specific aspect within a sentence. Recently, research has been conducted to convert emotion analysis based on classification work to a model that obtains more diverse and accurate emotion expressions using generative language models. We propose a method of fine-tuning by introducing Low-Rank Adaptation (LoRA) into a generative language model to improve the performance of these generative-based ABSA models and enable efficient learning. In this paper, GloABSA (GPT2+LoRA Aspect-Based Sentiment Analysis) aims at improving the learning efficiency of the previously proposed GPTABSA model. In this study, LoRA is introduced and fine-tuned to the GPT2 model to predict aspects and polarities using enhanced contextual information, and to reduce the number of parameters to enable efficient learning. Experiments using a benchmark dataset of ABSA, let us show that our proposed method outperforms previous studies and significantly reduces the number of parameters.","PeriodicalId":518694,"journal":{"name":"2024 IEEE International Conference on Consumer Electronics (ICCE)","volume":"65 10","pages":"1-4"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Efficient Fine-tuning of Generative Language Model for Aspect-Based Sentiment Analysis\",\"authors\":\"Chaelyn Lee, Hanyong Lee, Kyumin Kim, Sojeong Kim, Jae-Soung Lee\",\"doi\":\"10.1109/ICCE59016.2024.10444216\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sentiment analysis is considered as an important study where be able to automatically extract the polarity of consumers or users' opinions and use it as important data for decision-making in companies or organizations. It has further developed into Aspect-Based Sentiment Analysis research that predicts the polarity for a specific aspect within a sentence. Recently, research has been conducted to convert emotion analysis based on classification work to a model that obtains more diverse and accurate emotion expressions using generative language models. We propose a method of fine-tuning by introducing Low-Rank Adaptation (LoRA) into a generative language model to improve the performance of these generative-based ABSA models and enable efficient learning. In this paper, GloABSA (GPT2+LoRA Aspect-Based Sentiment Analysis) aims at improving the learning efficiency of the previously proposed GPTABSA model. In this study, LoRA is introduced and fine-tuned to the GPT2 model to predict aspects and polarities using enhanced contextual information, and to reduce the number of parameters to enable efficient learning. 
Experiments using a benchmark dataset of ABSA, let us show that our proposed method outperforms previous studies and significantly reduces the number of parameters.\",\"PeriodicalId\":518694,\"journal\":{\"name\":\"2024 IEEE International Conference on Consumer Electronics (ICCE)\",\"volume\":\"65 10\",\"pages\":\"1-4\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2024 IEEE International Conference on Consumer Electronics (ICCE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCE59016.2024.10444216\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 IEEE International Conference on Consumer Electronics (ICCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCE59016.2024.10444216","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Efficient Fine-tuning of Generative Language Model for Aspect-Based Sentiment Analysis
Sentiment analysis is regarded as an important field of study because it can automatically extract the polarity of consumer or user opinions and provide important data for decision-making in companies and organizations. It has further developed into Aspect-Based Sentiment Analysis (ABSA), which predicts the polarity of a specific aspect within a sentence. Recently, research has shifted from classification-based sentiment analysis toward models that obtain more diverse and accurate sentiment expressions using generative language models. We propose a fine-tuning method that introduces Low-Rank Adaptation (LoRA) into a generative language model to improve the performance of these generative ABSA models and enable efficient learning. In this paper, GloABSA (GPT2+LoRA Aspect-Based Sentiment Analysis) aims to improve the learning efficiency of the previously proposed GPTABSA model. LoRA is applied to the GPT-2 model, which is fine-tuned to predict aspects and polarities using enhanced contextual information while reducing the number of trainable parameters to enable efficient learning. Experiments on an ABSA benchmark dataset show that the proposed method outperforms previous studies while significantly reducing the number of parameters.
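To illustrate the general idea described in the abstract, the following is a minimal sketch (not the authors' code) of attaching LoRA adapters to GPT-2 for generative ABSA with Hugging Face Transformers and PEFT. The prompt/target format, LoRA hyperparameters, and module names are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: LoRA fine-tuning of GPT-2 for generative aspect-based
# sentiment analysis. Hyperparameters and the prompt format are assumptions.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from peft import LoraConfig, get_peft_model, TaskType

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Wrap GPT-2 with low-rank adapters; only the LoRA matrices are trained,
# which is where the reduction in trainable parameters comes from.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update (assumed value)
    lora_alpha=16,              # scaling factor (assumed value)
    lora_dropout=0.1,
    target_modules=["c_attn"],  # GPT-2's fused query/key/value projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the much smaller trainable subset

# One illustrative training pair: sentence -> generated "aspect, polarity" text.
prompt = "Review: The battery life is great but the screen is dim. Aspects:"
target = " battery life, positive; screen, negative"

inputs = tokenizer(prompt + target, return_tensors="pt")
labels = inputs["input_ids"].clone()

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # gradients flow only into the LoRA parameters
```

In a full training setup this step would be repeated over an ABSA benchmark dataset (e.g., restaurant or laptop review corpora) inside a standard training loop or Trainer; at inference time the model generates the aspect-polarity string conditioned on the review prompt.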