A Natural Language Understanding Model Based on Encoding Fusion For Power Marketing Indicator Answering

Shiyu Xu, Hui Song, Renchang Wu, Junwei Shi
DOI: 10.1109/epce58798.2023.00011
Published in: 2023 2nd Asia Conference on Electrical, Power and Computer Engineering (EPCE), April 2023
Citations: 0

Abstract

Accurate understanding of user questions is at the core of a domain-specific, task-oriented dialogue system. To apply a Natural Language Understanding (NLU) model to power marketing indicator Q&A, the first step is to define the NLU task schema based on domain background knowledge and to manually annotate a training dataset. Because historical conversation data is lacking, manually collecting questions and annotating them is labor-intensive, and the resulting insufficient sample size degrades model performance. We further propose an approach that improves the end-to-end NLU model with marketing-domain triple knowledge, which provides rich contextual information for the slot representation. During the NLU model's encoding stage, the representation of entity relationships is incorporated into the token encodings, enhancing the model's understanding of domain terms that do not appear in the training samples. Practice has shown that introducing domain knowledge does make up for the lack of training samples and significantly improves the accuracy of slot value recognition.
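The encoding-fusion idea described in the abstract, injecting a triple-derived entity representation into the token encodings over the matched entity span, might be sketched as follows. This is a minimal illustration under stated assumptions: the additive blending weight `alpha`, the `triple_emb` lookup, and the function name are hypothetical and not taken from the paper, which does not specify its exact fusion mechanism here.

```python
import numpy as np

def fuse_knowledge(token_emb, triple_emb, entity_spans, alpha=0.5):
    """Blend domain-triple embeddings into token encodings.

    token_emb    -- (seq_len, d) token encodings from the NLU encoder
    triple_emb   -- dict: entity name -> (d,) vector derived from its
                    knowledge-graph triples (hypothetical lookup table)
    entity_spans -- list of (start, end, entity) spans matched in the query
    alpha        -- interpolation weight for the knowledge signal (assumed)
    """
    fused = token_emb.copy()
    for start, end, entity in entity_spans:
        k = triple_emb[entity]
        # interpolate the triple representation into every token of the span,
        # leaving tokens outside any entity span unchanged
        fused[start:end] = (1 - alpha) * fused[start:end] + alpha * k
    return fused
```

In a real model the fused encodings would then feed the slot-filling decoder, so that a domain term unseen in the annotated samples still carries signal from its knowledge-graph neighborhood.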