{"title":"A Natural Language Understanding Model Based on Encoding Fusion For Power Marketing Indicator Answering","authors":"Shiyu Xu, Hui Song, Renchang Wu, Junwei Shi","doi":"10.1109/epce58798.2023.00011","DOIUrl":null,"url":null,"abstract":"Accurate understanding of user questions is the core of a domain-oriented, task-oriented dialogue system. To apply a Natural Language Understanding (NLU) model to power marketing indicator Q&A, the first step is to define the NLU task schema based on domain background knowledge and manually annotate a training dataset. Because historical conversation data are lacking, manually composing questions and annotating them is labor-intensive, and the insufficient sample size degrades model performance. We therefore propose an approach that improves the end-to-end NLU model with marketing-domain triple knowledge, which provides rich contextual information for the slot representation. During the NLU model's encoding stage, the representation of entity relationships is incorporated into the token encodings, enhancing the model's understanding of domain terms that do not appear in the training samples. Practice has shown that introducing domain knowledge does make up for the lack of training samples and significantly improves the accuracy of slot-value recognition.","PeriodicalId":355442,"journal":{"name":"2023 2nd Asia Conference on Electrical, Power and Computer Engineering (EPCE)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 2nd Asia Conference on Electrical, Power and Computer Engineering (EPCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/epce58798.2023.00011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Accurate understanding of user questions is the core of a domain-oriented, task-oriented dialogue system. To apply a Natural Language Understanding (NLU) model to power marketing indicator Q&A, the first step is to define the NLU task schema based on domain background knowledge and manually annotate a training dataset. Because historical conversation data are lacking, manually composing questions and annotating them is labor-intensive, and the insufficient sample size degrades model performance. We therefore propose an approach that improves the end-to-end NLU model with marketing-domain triple knowledge, which provides rich contextual information for the slot representation. During the NLU model's encoding stage, the representation of entity relationships is incorporated into the token encodings, enhancing the model's understanding of domain terms that do not appear in the training samples. Practice has shown that introducing domain knowledge does make up for the lack of training samples and significantly improves the accuracy of slot-value recognition.
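The encoding-fusion idea described in the abstract — injecting entity-relationship information from domain knowledge triples into each token's encoding — can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the vocabulary, the example triples, and the use of simple vector addition with a mean over relation embeddings are all assumptions standing in for the trained knowledge encoder and fusion layer the paper would use.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding size (illustrative)

# Hypothetical vocabulary and marketing-domain knowledge triples
# (head entity, relation, tail entity) -- invented for this sketch.
vocab = {"query": 0, "electricity": 1, "sales": 2, "rate": 3}
triples = [("electricity", "has_indicator", "sales"),
           ("sales", "measured_as", "rate")]

token_emb = rng.normal(size=(len(vocab), D))

# Give each relation type its own embedding (stand-in for a trained KG encoder).
rel_ids = {r: i for i, r in enumerate(sorted({t[1] for t in triples}))}
rel_emb = rng.normal(size=(len(rel_ids), D))

def knowledge_vec(word):
    # Average the embeddings of all relations the word participates in;
    # words with no triples get a zero vector (no knowledge signal).
    vecs = [rel_emb[rel_ids[r]] for h, r, t in triples if word in (h, t)]
    return np.mean(vecs, axis=0) if vecs else np.zeros(D)

def fused_encoding(tokens):
    # Encoding fusion: add the triple-derived knowledge vector to each
    # token embedding, so domain entities absent from the training samples
    # still carry contextual information into the slot tagger.
    return np.stack([token_emb[vocab[w]] + knowledge_vec(w) for w in tokens])

enc = fused_encoding(["query", "electricity", "sales"])
print(enc.shape)  # (3, 8)
```

Tokens covered by the knowledge base ("electricity", "sales") receive shifted encodings, while out-of-knowledge tokens ("query") pass through unchanged, which mirrors how the fusion only affects domain terms.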