{"title":"基于顺序的多轮对话任务研究","authors":"Yingrui Pang, Zhenni Gong, Zixuan Zhao, Yanyan Xu, Dengfeng Ke, Kaile Su","doi":"10.1109/icccs55155.2022.9846540","DOIUrl":null,"url":null,"abstract":"Multi-round dialogue is one of the most practical techniques in natural language processing. The current multi-round dialogue systems generally suffer from contextual information loss and lack of diversity in generated answers. Therefore, we propose a model based on Sequicity. We use the gate recurrent unit (GRU) to encode the current question, the response of the previous sentence and the semantic slot information. Then, encoding results are fed into a context encoder to generate context information. During the training procedure, the results of the two encoders are input into the recognition network, and the latent variables are sampled from the recognition network; During the test procedure, the concatenating results are input into the prior network, and the latent variables are sampled from the prior network. The latent variables, the encoding results of the current question and the above semantic slot information are concatenated and input to the response decoder. Finally, the decoder employs the Softmax function for decoding. On both CamRest and KVRET public datasets, the proposed model achieves the best results. Compared with the baseline Sequicity, which had the best results before on CamRest, the model's Success F1 is relatively improved by 3.4%, BLEU by 9.6% and Entity match rate by 3.3%. On the KVRET dataset, Success F1 is relatively improved by 2.0%, BLEU by 1.5% and Entity Match Rate by 2.8%.","PeriodicalId":121713,"journal":{"name":"2022 7th International Conference on Computer and Communication Systems (ICCCS)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research on Multi-round Dialogue Tasks Based on Sequicity\",\"authors\":\"Yingrui Pang, Zhenni Gong, Zixuan Zhao, Yanyan Xu, Dengfeng Ke, Kaile Su\",\"doi\":\"10.1109/icccs55155.2022.9846540\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-round dialogue is one of the most practical techniques in natural language processing. The current multi-round dialogue systems generally suffer from contextual information loss and lack of diversity in generated answers. Therefore, we propose a model based on Sequicity. We use the gate recurrent unit (GRU) to encode the current question, the response of the previous sentence and the semantic slot information. Then, encoding results are fed into a context encoder to generate context information. During the training procedure, the results of the two encoders are input into the recognition network, and the latent variables are sampled from the recognition network; During the test procedure, the concatenating results are input into the prior network, and the latent variables are sampled from the prior network. The latent variables, the encoding results of the current question and the above semantic slot information are concatenated and input to the response decoder. Finally, the decoder employs the Softmax function for decoding. On both CamRest and KVRET public datasets, the proposed model achieves the best results. Compared with the baseline Sequicity, which had the best results before on CamRest, the model's Success F1 is relatively improved by 3.4%, BLEU by 9.6% and Entity match rate by 3.3%. 
On the KVRET dataset, Success F1 is relatively improved by 2.0%, BLEU by 1.5% and Entity Match Rate by 2.8%.\",\"PeriodicalId\":121713,\"journal\":{\"name\":\"2022 7th International Conference on Computer and Communication Systems (ICCCS)\",\"volume\":\"121 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 7th International Conference on Computer and Communication Systems (ICCCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/icccs55155.2022.9846540\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Computer and Communication Systems (ICCCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icccs55155.2022.9846540","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-round dialogue is one of the most practical applications of natural language processing. Current multi-round dialogue systems generally suffer from loss of contextual information and a lack of diversity in the generated answers. We therefore propose a model based on Sequicity. We use a gated recurrent unit (GRU) to encode the current question, the previous response, and the semantic-slot information. The encoding results are then fed into a context encoder to produce a context representation. During training, the outputs of the two encoders are fed into the recognition network, and the latent variables are sampled from the recognition network; during testing, the concatenated encodings are fed into the prior network, and the latent variables are sampled from the prior network. The latent variables, the encoding of the current question, and the semantic-slot information are concatenated and passed to the response decoder, which applies the Softmax function to produce the output at each step. The proposed model achieves the best results on both the CamRest and KVRET public datasets. Compared with the baseline Sequicity, which previously held the best results on CamRest, our model relatively improves Success F1 by 3.4%, BLEU by 9.6%, and Entity Match Rate by 3.3%. On the KVRET dataset, Success F1 is relatively improved by 2.0%, BLEU by 1.5%, and Entity Match Rate by 2.8%.
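The architecture the abstract describes (GRU encoders, a context encoder, a recognition network used in training and a prior network used at test time, and a latent-conditioned response decoder) follows the shape of a conditional variational autoencoder layered on Sequicity. Below is a minimal PyTorch sketch of that pipeline under stated assumptions: all module names, dimensions, and wiring are illustrative reconstructions from the abstract, not the authors' implementation.

```python
# Minimal sketch (assumption: PyTorch; names and sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAESequicitySketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # GRU encoders for the current question, the previous response,
        # and the semantic-slot information.
        self.question_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.prev_resp_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.slot_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Context encoder run over the three encoding results.
        self.context_enc = nn.GRU(hid_dim, hid_dim, batch_first=True)
        # Recognition network (training) and prior network (test); each
        # outputs the mean and log-variance of a Gaussian over z.
        self.recognition = nn.Linear(2 * hid_dim, 2 * z_dim)
        self.prior = nn.Linear(hid_dim, 2 * z_dim)
        # Response decoder conditioned on [z; question; slots].
        self.decoder = nn.GRU(emb_dim, z_dim + 2 * hid_dim, batch_first=True)
        self.out = nn.Linear(z_dim + 2 * hid_dim, vocab_size)

    @staticmethod
    def _last_hidden(gru, tokens, embed):
        _, h = gru(embed(tokens))   # h: (1, batch, hid_dim)
        return h.squeeze(0)         # (batch, hid_dim)

    def forward(self, question, prev_resp, slots, response, training=True):
        q = self._last_hidden(self.question_enc, question, self.embed)
        r = self._last_hidden(self.prev_resp_enc, prev_resp, self.embed)
        s = self._last_hidden(self.slot_enc, slots, self.embed)
        # Feed the three encodings to the context encoder as a sequence.
        _, ctx = self.context_enc(torch.stack([q, r, s], dim=1))
        ctx = ctx.squeeze(0)
        # Training: sample z from the recognition network; test: from the prior.
        if training:
            mu, logvar = self.recognition(torch.cat([q, ctx], -1)).chunk(2, -1)
        else:
            mu, logvar = self.prior(q).chunk(2, -1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # Decode from [z; question encoding; slot encoding]; Softmax
        # (here log-Softmax, for a stable NLL loss) over the vocabulary.
        h0 = torch.cat([z, q, s], -1).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(response), h0)
        return F.log_softmax(self.out(dec_out), dim=-1), mu, logvar
```

Sampling z from the recognition network during training (where the reference response informs the posterior) while falling back to the prior at test time is what lets the latent variable inject diversity into the generated answers without access to the gold response.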
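The abstract does not spell out the training objective, but a CVAE of this shape is conventionally optimized with a reconstruction term plus a KL term pulling the recognition posterior toward the prior; during training both networks would be evaluated on the same batch so the KL can be computed. The loss below is a standard-CVAE assumption, not the paper's exact formulation.

```python
def cvae_loss(log_probs, target, mu_q, logvar_q, mu_p, logvar_p, pad_id=0):
    # Reconstruction: token-level NLL of the reference response.
    nll = F.nll_loss(log_probs.transpose(1, 2), target, ignore_index=pad_id)
    # KL(q(z|x, ctx) || p(z|x)) between two diagonal Gaussians (assumed form).
    kl = 0.5 * (logvar_p - logvar_q
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1.0).sum(-1).mean()
    return nll + kl
```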