{"title":"Cross-Domain Aspect-based Sentiment Classification with Pre-Training and Fine-Tuning Strategy for Low-Resource Domains","authors":"Chunjun Zhao, Meiling Wu, Xinyi Yang, Xuzhuang Sun, Suge Wang, Deyu Li","doi":"10.1145/3653299","DOIUrl":null,"url":null,"abstract":"<p>Aspect-based sentiment classification (ABSC) is a crucial subtask of fine-grained sentiment analysis (SA), which aims to predict the sentiment polarity of the given aspects in a sentence as positive, negative, or neutral. Most existing ABSC methods based on supervised learning. However, these methods rely heavily on fine-grained labeled training data, which can be scarce in low-resource domains, limiting their effectiveness. To overcome this challenge, we propose a low-resource cross-domain aspect-based sentiment classification (CDABSC) approach based on a pre-training and fine-tuning strategy. This approach applies the pre-training and fine-tuning strategy to an advanced deep learning method designed for ABSC, namely the attention-based encoding graph convolutional network (AEGCN) model. Specifically, a high-resource domain is selected as the source domain, and the AEGCN model is pre-trained using a large amount of fine-grained annotated data from the source domain. The optimal parameters of the model are preserved. Subsequently, a low-resource domain is used as the target domain, and the pre-trained model parameters are used as the initial parameters of the target domain model. The target domain is fine-tuned using a small amount of annotated data to adapt the parameters to the target domain model, improving the accuracy of sentiment classification in the low-resource domain. Finally, experimental validation on two domain benchmark datasets, restaurant and laptop, demonstrates that significant outperformance of our approach over the baselines in CDABSC Micro-F1.</p>","PeriodicalId":54312,"journal":{"name":"ACM Transactions on Asian and Low-Resource Language Information Processing","volume":"136 1","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Asian and Low-Resource Language Information Processing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3653299","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Aspect-based sentiment classification (ABSC) is a crucial subtask of fine-grained sentiment analysis (SA), which aims to predict the sentiment polarity of given aspects in a sentence as positive, negative, or neutral. Most existing ABSC methods are based on supervised learning. However, these methods rely heavily on fine-grained labeled training data, which can be scarce in low-resource domains, limiting their effectiveness. To overcome this challenge, we propose a low-resource cross-domain aspect-based sentiment classification (CDABSC) approach based on a pre-training and fine-tuning strategy. This approach applies the pre-training and fine-tuning strategy to an advanced deep learning method designed for ABSC, namely the attention-based encoding graph convolutional network (AEGCN) model. Specifically, a high-resource domain is selected as the source domain, and the AEGCN model is pre-trained using a large amount of fine-grained annotated data from the source domain. The optimal parameters of the model are preserved. Subsequently, a low-resource domain is used as the target domain, and the pre-trained model parameters serve as the initial parameters of the target domain model. The target domain model is then fine-tuned using a small amount of annotated data to adapt the parameters to the target domain, improving the accuracy of sentiment classification in the low-resource domain. Finally, experimental validation on two domain benchmark datasets, restaurant and laptop, demonstrates that our approach significantly outperforms the baselines in CDABSC Micro-F1.
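The transfer recipe the abstract describes can be expressed compactly in code. Below is a minimal PyTorch sketch, not the authors' implementation: `AspectClassifier` is a hypothetical stand-in for the AEGCN model (whose architecture is not detailed here), and the data loaders use random tensors purely so the sketch runs end to end. It shows the two-stage flow: pre-train on the high-resource source domain while preserving the best parameters, then initialize the target model from them and fine-tune on a small target-domain set.

```python
# Sketch of pre-training on a source domain, then fine-tuning on a low-resource
# target domain. AspectClassifier and the synthetic data are illustrative only.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class AspectClassifier(nn.Module):
    """Hypothetical stand-in for the AEGCN encoder plus a 3-way sentiment head."""

    def __init__(self, in_dim: int = 768, hidden: int = 256, classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, classes)  # positive / negative / neutral

    def forward(self, x):
        return self.head(self.encoder(x))


def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best_state, best_loss = None, float("inf")
    for _ in range(epochs):
        total = 0.0
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            total += loss.item()
        # Preserve the optimal parameters, as the paper keeps the best
        # source-domain parameters for transfer.
        if total < best_loss:
            best_loss = total
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model


# Synthetic stand-ins: a large labeled source set, a small labeled target set.
source_loader = DataLoader(
    TensorDataset(torch.randn(512, 768), torch.randint(0, 3, (512,))), batch_size=32
)
target_loader = DataLoader(
    TensorDataset(torch.randn(64, 768), torch.randint(0, 3, (64,))), batch_size=16
)

# 1) Pre-train on the high-resource source domain (e.g., restaurant reviews).
model = AspectClassifier()
model = train(model, source_loader, epochs=20, lr=1e-3)

# 2) Fine-tune on the low-resource target domain (e.g., laptop reviews),
#    starting from the source-domain parameters; a smaller learning rate
#    keeps the small labeled set from overwriting what was transferred.
model = train(model, target_loader, epochs=5, lr=1e-4)
```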
Journal Introduction:
The ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) publishes high-quality original archival papers and technical notes in the areas of computation and processing of information in Asian languages, low-resource languages of Africa, Australasia, Oceania and the Americas, as well as related disciplines. The subject areas covered by TALLIP include, but are not limited to:
-Computational Linguistics: including computational phonology, computational morphology, computational syntax (e.g., parsing), computational semantics, computational pragmatics, etc.
-Linguistic Resources: including computational lexicography, terminology, electronic dictionaries, cross-lingual dictionaries, electronic thesauri, etc.
-Hardware and software algorithms and tools for Asian or low-resource language processing, e.g., handwritten character recognition.
-Information Understanding: including text understanding, speech understanding, character recognition, discourse processing, dialogue systems, etc.
-Machine Translation involving Asian or low-resource languages.
-Information Retrieval: including natural language processing (NLP) for concept-based indexing, natural language query interfaces, semantic relevance judgments, etc.
-Information Extraction and Filtering: including automatic abstraction, user profiling, etc.
-Speech processing: including text-to-speech synthesis and automatic speech recognition.
-Multimedia Asian Information Processing: including speech, image, video, image/text translation, etc.
-Cross-lingual information processing involving Asian or low-resource languages.
Papers that deal with theory, systems design, evaluation, and applications in the aforesaid subjects are appropriate for TALLIP. Emphasis will be placed on the originality and the practical significance of the reported research.