Exploring ChatGPT-based Augmentation Strategies for Contrastive Aspect-based Sentiment Analysis
Lingling Xu, Haoran Xie, S. Joe Qin, Fu Lee Wang, Xiaohui Tao
{"title":"为基于对比方面的情感分析探索基于 ChatGPT 的增强策略","authors":"Lingling Xu, Haoran Xie, S. Joe Qin, Fu Lee Wang, Xiaohui Tao","doi":"arxiv-2409.11218","DOIUrl":null,"url":null,"abstract":"Aspect-based sentiment analysis (ABSA) involves identifying sentiment towards\nspecific aspect terms in a sentence and allows us to uncover nuanced\nperspectives and attitudes on particular aspects of a product, service, or\ntopic. However, the scarcity of labeled data poses a significant challenge to\ntraining high-quality models. To address this issue, we explore the potential\nof data augmentation using ChatGPT, a well-performing large language model\n(LLM), to enhance the sentiment classification performance towards aspect\nterms. Specifically, we explore three data augmentation strategies based on\nChatGPT: context-focused, aspect-focused, and context-aspect data augmentation\ntechniques. Context-focused data augmentation focuses on changing the word\nexpression of context words in the sentence while keeping aspect terms\nunchanged. In contrast, aspect-focused data augmentation aims to change aspect\nterms but keep context words unchanged. Context-Aspect data augmentation\nintegrates the above two data augmentations to generate augmented samples.\nFurthermore, we incorporate contrastive learning into the ABSA tasks to improve\nperformance. Extensive experiments show that all three data augmentation\ntechniques lead to performance improvements, with the context-aspect data\naugmentation strategy performing best and surpassing the performance of the\nbaseline models.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"23 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring ChatGPT-based Augmentation Strategies for Contrastive Aspect-based Sentiment Analysis\",\"authors\":\"Lingling Xu, Haoran Xie, S. Joe Qin, Fu Lee Wang, Xiaohui Tao\",\"doi\":\"arxiv-2409.11218\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Aspect-based sentiment analysis (ABSA) involves identifying sentiment towards\\nspecific aspect terms in a sentence and allows us to uncover nuanced\\nperspectives and attitudes on particular aspects of a product, service, or\\ntopic. However, the scarcity of labeled data poses a significant challenge to\\ntraining high-quality models. To address this issue, we explore the potential\\nof data augmentation using ChatGPT, a well-performing large language model\\n(LLM), to enhance the sentiment classification performance towards aspect\\nterms. Specifically, we explore three data augmentation strategies based on\\nChatGPT: context-focused, aspect-focused, and context-aspect data augmentation\\ntechniques. Context-focused data augmentation focuses on changing the word\\nexpression of context words in the sentence while keeping aspect terms\\nunchanged. In contrast, aspect-focused data augmentation aims to change aspect\\nterms but keep context words unchanged. Context-Aspect data augmentation\\nintegrates the above two data augmentations to generate augmented samples.\\nFurthermore, we incorporate contrastive learning into the ABSA tasks to improve\\nperformance. 
Extensive experiments show that all three data augmentation\\ntechniques lead to performance improvements, with the context-aspect data\\naugmentation strategy performing best and surpassing the performance of the\\nbaseline models.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"23 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11218\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11218","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Aspect-based sentiment analysis (ABSA) involves identifying the sentiment towards specific aspect terms in a sentence and allows us to uncover nuanced perspectives and attitudes on particular aspects of a product, service, or topic. However, the scarcity of labeled data poses a significant challenge to training high-quality models. To address this issue, we explore the potential of data augmentation using ChatGPT, a well-performing large language model (LLM), to improve sentiment classification for aspect terms. Specifically, we explore three ChatGPT-based data augmentation strategies: context-focused, aspect-focused, and context-aspect data augmentation. Context-focused data augmentation rephrases the context words in a sentence while keeping the aspect terms unchanged. In contrast, aspect-focused data augmentation replaces the aspect terms while keeping the context words unchanged. Context-aspect data augmentation combines the two, generating augmented samples in which both context words and aspect terms change (a prompt-level sketch is given below).
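As a concrete illustration, here is a minimal Python sketch of how the three strategies could be driven through ChatGPT via the OpenAI chat API. The prompt wording, the model name, and the `augment` helper are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical prompt templates for the three augmentation strategies.
# Assumes the openai>=1.0 Python client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    # Context-focused: paraphrase the context, keep the aspect term verbatim.
    "context": (
        "Rewrite the sentence with different wording but the same meaning and "
        "sentiment. Keep the aspect term '{aspect}' exactly as it is.\n"
        "Sentence: {sentence}"
    ),
    # Aspect-focused: swap the aspect term, keep the context verbatim.
    "aspect": (
        "Replace the aspect term '{aspect}' with a similar aspect term that "
        "fits the context. Do not change any other words.\n"
        "Sentence: {sentence}"
    ),
    # Context-aspect: apply both transformations at once.
    "context_aspect": (
        "Rewrite the sentence with different wording and also replace the "
        "aspect term '{aspect}' with a similar one, preserving the original "
        "sentiment.\nSentence: {sentence}"
    ),
}

def augment(sentence: str, aspect: str, strategy: str) -> str:
    """Return one augmented sample for the given strategy."""
    prompt = PROMPTS[strategy].format(aspect=aspect, sentence=sentence)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return resp.choices[0].message.content.strip()

# e.g. augment("The battery lasts all day but the screen is dim.",
#              "battery", "context")
```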
Furthermore, we incorporate contrastive learning into the ABSA task to improve performance; a sketch of one such objective appears after this abstract. Extensive experiments show that all three data augmentation techniques lead to performance improvements, with the context-aspect data augmentation strategy performing best and surpassing the baseline models.
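The abstract does not spell out the contrastive objective, so below is a minimal PyTorch sketch of a supervised contrastive loss in the spirit of SupCon (Khosla et al., 2020), where original and augmented samples that share a sentiment label act as positives. The authors' actual formulation may differ.

```python
# Hypothetical supervised contrastive loss over original + augmented samples.
import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """features: (N, d) embeddings of original and augmented samples
    (normalized internally); labels: (N,) sentiment labels. Samples
    sharing a label are treated as positives for one another."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature        # (N, N) scaled cosine sims
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    # Positives: same sentiment label, self excluded.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Zero out non-positive entries (this also removes the -inf diagonal).
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                           # anchors with a positive
    loss = -pos_log_prob.sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# In training, this would typically be added to the cross-entropy
# classification loss, e.g. loss = ce + lam * supcon_loss(z, y).
```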