Title: SAKP: A Korean Sentiment Analysis Model via Knowledge Base and Prompt Tuning
Authors: Haiqiang Wen, Zhenguo Zhang
DOI: 10.1109/CCAI57533.2023.10201257
Venue: 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)
Published: 2023-05-26
Citations: 0
Abstract
With the help of pre-trained language models (PLMs), tasks such as sentiment analysis and text classification have achieved good results. Since the advent of prompt tuning, studies have shown that, when training data are scarce, prompt tuning offers significant advantages over the conventional approach of fine-tuning with an additional classifier head. At present, there are relatively few studies on sentiment analysis of Korean-Chinese texts. This paper proposes a low-resource sentiment classification method based on a pre-trained language model combined with prompt tuning. In this work, we use the pre-trained language model KLUE and design a Korean prompt template with a knowledge-base-expanded and filtered verbalizer. We focus on collecting external knowledge and integrating it into the verbalizer to form knowledgeable prompt tuning, which improves and stabilizes plain prompt tuning. Specifically, we use the K-means clustering algorithm to construct a label word space expanded from an external knowledge base (KB), and use the PLM itself to refine this expanded label word space before using it for prediction. Extensive experiments on few-shot sentiment classification tasks demonstrate the effectiveness of knowledgeable prompt tuning.
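The abstract's K-means step can be illustrated with a minimal sketch. This is not the authors' code: the words, their toy 2D "embeddings", and the fixed initial centroids are all hypothetical, chosen only to show how KB-expanded candidate label words could be clustered into per-class label word spaces. A real system would use PLM embeddings and a refinement pass over the resulting clusters.

```python
# Illustrative sketch (hypothetical data): cluster KB-expanded candidate
# label words into per-class label word spaces with plain K-means.

def kmeans(points, centroids, iters=10):
    """Basic K-means on lists of coordinate lists; returns centroids and assignments."""
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        assign = [min(range(len(centroids)),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centroids[c])))
                  for pt in points]
        # Recompute each centroid as the mean of its members.
        for c in range(len(centroids)):
            members = [pt for pt, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return centroids, assign

# Hypothetical candidate words with toy 2D embeddings: positive-leaning
# words sit near (1, 1), negative-leaning words near (-1, -1).
words = {
    "great": (0.9, 1.1), "wonderful": (1.2, 0.8), "pleasant": (0.8, 0.9),
    "awful": (-1.0, -0.9), "terrible": (-1.1, -1.2), "poor": (-0.8, -1.0),
}
pts = list(words.values())
# Fixed initial centroids keep the run deterministic.
_, assign = kmeans(pts, [[1.0, 1.0], [-1.0, -1.0]])

# Group words by cluster: each cluster is one class's label word space.
label_space = {0: [], 1: []}
for w, a in zip(words, assign):
    label_space[a].append(w)
print(label_space)
```

In the paper's pipeline, each cluster would then be refined by the PLM itself (e.g. by scoring each candidate word's fit to the prompt template) before the verbalizer maps PLM predictions over these words back to sentiment labels.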