Enhancing aspect-based sentiment analysis with BERT-driven context generation and quality filtering

Chuanjun Zhao, Rong Feng, Xuzhuang Sun, Lihua Shen, Jing Gao, Yanjie Wang

Natural Language Processing Journal, Volume 7, Article 100077. Published 2024-04-30. DOI: 10.1016/j.nlp.2024.100077. Available at: https://www.sciencedirect.com/science/article/pii/S2949719124000256
Fine-grained sentiment analysis, commonly referred to as aspect-based sentiment analysis (ABSA), has garnered substantial attention in both academia and industry. ABSA aims to uncover the sentiment orientation associated with specific entities or attributes within text, yielding a more precise depiction of intricate emotional nuances. However, because ABSA is applied across a wide range of domains, some of these domains suffer from small datasets and a lack of exhaustive, high-quality corpora, giving rise to few-shot learning and low-resource scenarios. One viable way to address limited training data is to expand the dataset through text-based context generation. In this study, we combine BERT-based text generation with text filtering algorithms to build our model. The model fully exploits contextual information captured by BERT, with particular emphasis on the interrelationships between sentences, and it integrates the relationships between sentences and labels to produce an initial data augmentation corpus. Filtering algorithms are then applied to improve the quality of this initial corpus by removing low-quality generated data, yielding the final text-augmented dataset. Experimental results on the SemEval-2014 Laptop and Restaurant datasets show that the augmented dataset improves text quality and markedly boosts the performance of models for aspect-level sentiment classification.
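To make the generate-then-filter idea concrete, below is a minimal sketch of BERT-driven, context-aware augmentation followed by a quality filter. It is not the authors' exact model: the token-masking strategy, the use of an off-the-shelf sentiment classifier as a stand-in for the paper's filtering algorithms, and the 0.9 confidence threshold are all illustrative assumptions.

```python
# Sketch only: BERT fill-mask augmentation + label-consistency quality filter.
# Masking strategy, filter model, and threshold are assumptions, not the paper's method.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
quality_clf = pipeline("sentiment-analysis")  # proxy quality filter (assumption)

def generate_candidates(sentence: str, aspect: str, n_per_slot: int = 3):
    """Mask one non-aspect token at a time and let BERT propose context-aware replacements."""
    tokens = sentence.split()
    candidates = []
    for i, tok in enumerate(tokens):
        if tok.lower() in aspect.lower():
            continue  # never rewrite the aspect term itself
        masked = " ".join(tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:])
        for pred in fill_mask(masked, top_k=n_per_slot):
            candidates.append(pred["sequence"])
    return candidates

def filter_low_quality(candidates, original_label: str, threshold: float = 0.9):
    """Keep only generated sentences whose predicted sentiment still matches the gold label."""
    kept = []
    for sent in candidates:
        pred = quality_clf(sent)[0]
        if pred["label"] == original_label and pred["score"] >= threshold:
            kept.append(sent)
    return kept

# Example: augment one Restaurant-domain training instance (hypothetical data).
sentence, aspect, label = "The pizza was absolutely delicious", "pizza", "POSITIVE"
augmented = filter_low_quality(generate_candidates(sentence, aspect), label)
print(random.sample(augmented, min(3, len(augmented))))
```

In this sketch the generator preserves the aspect term and lets BERT rewrite the surrounding context, while the filter discards candidates whose predicted polarity drifts away from the original label, mirroring the paper's two-stage design of initial augmentation followed by quality filtering.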