{"title":"ID-SF-Fusion:用于自然语言理解的意图检测和槽填充合作模型","authors":"Meng Zhu, Xiaolong Xu","doi":"10.1108/dta-03-2023-0088","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>Intent detection (ID) and slot filling (SF) are two important tasks in natural language understanding. ID is to identify the main intent of a paragraph of text. The goal of SF is to extract the information that is important to the intent from the input sentence. However, most of the existing methods use sentence-level intention recognition, which has the risk of error propagation, and the relationship between intention recognition and SF is not explicitly modeled. Aiming at this problem, this paper proposes a collaborative model of ID and SF for intelligent spoken language understanding called ID-SF-Fusion.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>ID-SF-Fusion uses Bidirectional Encoder Representation from Transformers (BERT) and Bidirectional Long Short-Term Memory (BiLSTM) to extract effective word embedding and context vectors containing the whole sentence information respectively. Fusion layer is used to provide intent–slot fusion information for SF task. In this way, the relationship between ID and SF task is fully explicitly modeled. This layer takes the result of ID and slot context vectors as input to obtain the fusion information which contains both ID result and slot information. Meanwhile, to further reduce error propagation, we use word-level ID for the ID-SF-Fusion model. Finally, two tasks of ID and SF are realized by joint optimization training.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>We conducted experiments on two public datasets, Airline Travel Information Systems (ATIS) and Snips. The results show that the Intent ACC score and Slot F1 score of ID-SF-Fusion on ATIS and Snips are 98.0 per cent and 95.8 per cent, respectively, and the two indicators on Snips dataset are 98.6 per cent and 96.7 per cent, respectively. These models are superior to slot-gated, SF-ID NetWork, stack-Prop and other models. In addition, ablation experiments were performed to further analyze and discuss the proposed model.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>This paper uses word-level intent recognition and introduces intent information into the SF process, which is a significant improvement on both data sets.</p><!--/ Abstract__block -->","PeriodicalId":56156,"journal":{"name":"Data Technologies and Applications","volume":"18 1","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ID-SF-Fusion: a cooperative model of intent detection and slot filling for natural language understanding\",\"authors\":\"Meng Zhu, Xiaolong Xu\",\"doi\":\"10.1108/dta-03-2023-0088\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3>Purpose</h3>\\n<p>Intent detection (ID) and slot filling (SF) are two important tasks in natural language understanding. ID is to identify the main intent of a paragraph of text. The goal of SF is to extract the information that is important to the intent from the input sentence. However, most of the existing methods use sentence-level intention recognition, which has the risk of error propagation, and the relationship between intention recognition and SF is not explicitly modeled. 
Aiming at this problem, this paper proposes a collaborative model of ID and SF for intelligent spoken language understanding called ID-SF-Fusion.</p><!--/ Abstract__block -->\\n<h3>Design/methodology/approach</h3>\\n<p>ID-SF-Fusion uses Bidirectional Encoder Representation from Transformers (BERT) and Bidirectional Long Short-Term Memory (BiLSTM) to extract effective word embedding and context vectors containing the whole sentence information respectively. Fusion layer is used to provide intent–slot fusion information for SF task. In this way, the relationship between ID and SF task is fully explicitly modeled. This layer takes the result of ID and slot context vectors as input to obtain the fusion information which contains both ID result and slot information. Meanwhile, to further reduce error propagation, we use word-level ID for the ID-SF-Fusion model. Finally, two tasks of ID and SF are realized by joint optimization training.</p><!--/ Abstract__block -->\\n<h3>Findings</h3>\\n<p>We conducted experiments on two public datasets, Airline Travel Information Systems (ATIS) and Snips. The results show that the Intent ACC score and Slot F1 score of ID-SF-Fusion on ATIS and Snips are 98.0 per cent and 95.8 per cent, respectively, and the two indicators on Snips dataset are 98.6 per cent and 96.7 per cent, respectively. These models are superior to slot-gated, SF-ID NetWork, stack-Prop and other models. In addition, ablation experiments were performed to further analyze and discuss the proposed model.</p><!--/ Abstract__block -->\\n<h3>Originality/value</h3>\\n<p>This paper uses word-level intent recognition and introduces intent information into the SF process, which is a significant improvement on both data sets.</p><!--/ Abstract__block -->\",\"PeriodicalId\":56156,\"journal\":{\"name\":\"Data Technologies and Applications\",\"volume\":\"18 1\",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-01-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data Technologies and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1108/dta-03-2023-0088\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data Technologies and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/dta-03-2023-0088","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Purpose
Intent detection (ID) and slot filling (SF) are two important tasks in natural language understanding. ID identifies the main intent of a piece of text, while SF extracts the information relevant to that intent from the input sentence. However, most existing methods perform sentence-level intent detection, which carries a risk of error propagation, and the relationship between ID and SF is not explicitly modeled. To address this problem, this paper proposes ID-SF-Fusion, a cooperative model of ID and SF for intelligent spoken language understanding.
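To make the two tasks concrete, a hypothetical ATIS-style example is shown below; the utterance, intent label and BIO slot tags are purely illustrative and are not taken from the paper or the actual ATIS label set.

```python
# Hypothetical ATIS-style example (illustrative only, not from the paper):
# intent detection assigns one label to the whole utterance, while slot
# filling tags each token with the BIO label of the slot it belongs to.
utterance = ["show", "flights", "from", "boston", "to", "denver", "on", "monday"]
intent = "atis_flight"
slots = ["O", "O", "O", "B-fromloc", "O", "B-toloc", "O", "B-depart_date"]
```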
Design/methodology/approach
ID-SF-Fusion uses Bidirectional Encoder Representations from Transformers (BERT) and a Bidirectional Long Short-Term Memory (BiLSTM) network to extract effective word embeddings and context vectors that capture whole-sentence information, respectively. A fusion layer provides intent–slot fusion information for the SF task, so that the relationship between the ID and SF tasks is explicitly modeled: it takes the ID results and the slot context vectors as input and produces fusion information that contains both the intent prediction and the slot information. Meanwhile, to further reduce error propagation, ID-SF-Fusion performs ID at the word level. Finally, the ID and SF tasks are trained through joint optimization.
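As a concrete illustration of the pipeline described above, the following is a minimal PyTorch sketch. The concatenation-based fusion operator, layer sizes and equal loss weighting are assumptions made for illustration, not the authors' exact design.

```python
# A minimal sketch of the kind of joint architecture the abstract describes:
# BERT word embeddings -> BiLSTM context vectors -> word-level intent logits,
# which are fused with the slot context vectors before slot tagging; the two
# tasks are trained with a joint loss. The fusion operator (concatenation),
# layer sizes and loss weighting are assumptions, not the paper's design.
import torch
import torch.nn as nn
from transformers import BertModel


class IntentSlotFusionSketch(nn.Module):
    def __init__(self, n_intents, n_slots, bert_name="bert-base-uncased",
                 lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.bilstm = nn.LSTM(hidden, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.intent_head = nn.Linear(2 * lstm_hidden, n_intents)
        # Fusion layer input: slot context vector concatenated with the
        # word-level intent distribution (an assumed fusion operator).
        self.slot_head = nn.Linear(2 * lstm_hidden + n_intents, n_slots)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        ctx, _ = self.bilstm(emb)              # (B, T, 2*H) slot context vectors
        intent_logits = self.intent_head(ctx)  # word-level intent logits
        fusion = torch.cat([ctx, intent_logits.softmax(-1)], dim=-1)
        slot_logits = self.slot_head(fusion)   # per-token slot logits
        return intent_logits, slot_logits


def joint_loss(intent_logits, slot_logits, intent_labels, slot_labels, alpha=0.5):
    # Word-level intent supervision: every token carries the utterance intent,
    # so the sentence-level prediction can be obtained by voting or averaging.
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    li = ce(intent_logits.reshape(-1, intent_logits.size(-1)),
            intent_labels.reshape(-1))
    ls = ce(slot_logits.reshape(-1, slot_logits.size(-1)),
            slot_labels.reshape(-1))
    return alpha * li + (1 - alpha) * ls
```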
Findings
We conducted experiments on two public datasets, Airline Travel Information Systems (ATIS) and Snips. The results show that ID-SF-Fusion achieves an intent accuracy of 98.0 per cent and a slot F1 score of 95.8 per cent on the ATIS dataset, and 98.6 per cent and 96.7 per cent, respectively, on the Snips dataset. The model outperforms Slot-Gated, SF-ID Network, Stack-Propagation and other baseline models. In addition, ablation experiments were performed to further analyze and discuss the proposed model.
Originality/value
This paper uses word-level intent detection and introduces intent information into the SF process, yielding significant improvements on both datasets.