{"title":"多回合响应生成的最后话语-语境注意模型","authors":"Guodong Zhang, Li-ting Mao, Jun Sun","doi":"10.1109/DCABES50732.2020.00057","DOIUrl":null,"url":null,"abstract":"Recently, conversation response generation task is attracting the attention of more and more researchers. Different from single-turn response generation, multi-turn response generation not only focuses on fluency, but also needs to make use of contextual information. Therefore, we believe that an appropriate response should be coherent to the last utterance, and take conversation history into consideration at the same time. We propose a Last Utterance-Context Attention model. The last utterance attention calculates each word in last utterance and form them as a vector. Representation of each utterance is processed by the context attention and formed as a vector as well. Then the two vectors are concatenated as a context vector for decoding the response. In addition, we also apply the multi-head self-attention mechanism to focus more on the key words in each utterance. Both automatic and human evaluation results show that our model outperform baseline models for multi-turn response generation.","PeriodicalId":351404,"journal":{"name":"2020 19th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Last Utterance-Context Attention Model for Multi-Turn Response Generation\",\"authors\":\"Guodong Zhang, Li-ting Mao, Jun Sun\",\"doi\":\"10.1109/DCABES50732.2020.00057\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, conversation response generation task is attracting the attention of more and more researchers. Different from single-turn response generation, multi-turn response generation not only focuses on fluency, but also needs to make use of contextual information. Therefore, we believe that an appropriate response should be coherent to the last utterance, and take conversation history into consideration at the same time. We propose a Last Utterance-Context Attention model. The last utterance attention calculates each word in last utterance and form them as a vector. Representation of each utterance is processed by the context attention and formed as a vector as well. Then the two vectors are concatenated as a context vector for decoding the response. In addition, we also apply the multi-head self-attention mechanism to focus more on the key words in each utterance. 
Both automatic and human evaluation results show that our model outperform baseline models for multi-turn response generation.\",\"PeriodicalId\":351404,\"journal\":{\"name\":\"2020 19th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES)\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 19th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DCABES50732.2020.00057\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 19th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DCABES50732.2020.00057","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Last Utterance-Context Attention Model for Multi-Turn Response Generation
Abstract: Recently, the conversation response generation task has attracted increasing attention from researchers. Unlike single-turn response generation, multi-turn response generation must not only produce fluent responses but also make use of contextual information. We therefore argue that an appropriate response should be coherent with the last utterance while also taking the conversation history into account. We propose a Last Utterance-Context Attention model. The last-utterance attention attends over each word in the last utterance and aggregates them into a vector; the representation of each utterance in the history is likewise processed by the context attention and aggregated into a vector. The two vectors are then concatenated into a context vector for decoding the response. In addition, we apply a multi-head self-attention mechanism to focus on the key words in each utterance. Both automatic and human evaluation results show that our model outperforms baseline models for multi-turn response generation.
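
As a rough illustration only (this is not the authors' published code), the sketch below gives one plausible PyTorch reading of the mechanism the abstract describes: one attention pools the words of the last utterance into a vector, another attention pools the per-utterance history representations into a second vector, and the two are concatenated into the context vector that conditions the decoder. The multi-head self-attention over the last utterance's words reflects the abstract's key-word emphasis. All class names, the additive scoring function, the head count, and the hidden sizes are assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttentionPool(nn.Module):
    """Pools a sequence of hidden states into one vector with learned scores.
    (Hypothetical: the paper's exact attention scoring function is not given
    in the abstract.)"""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden_dim)
        weights = F.softmax(self.score(states), dim=1)  # (batch, seq_len, 1)
        return (weights * states).sum(dim=1)            # (batch, hidden_dim)

class LastUtteranceContextAttention(nn.Module):
    """Builds the concatenated context vector from the last utterance and
    the conversation history, per the abstract's description."""
    def __init__(self, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        # Multi-head self-attention over the words of the last utterance,
        # used (per the abstract) to emphasize key words; num_heads is assumed.
        self.self_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                               batch_first=True)
        self.last_utt_pool = AdditiveAttentionPool(hidden_dim)  # word-level
        self.context_pool = AdditiveAttentionPool(hidden_dim)   # utterance-level

    def forward(self, last_utt_states: torch.Tensor,
                utterance_reprs: torch.Tensor) -> torch.Tensor:
        # last_utt_states: (batch, num_words, hidden) word states of last utterance
        # utterance_reprs: (batch, num_utts, hidden) one vector per history utterance
        refined, _ = self.self_attn(last_utt_states, last_utt_states,
                                    last_utt_states)
        last_vec = self.last_utt_pool(refined)        # (batch, hidden)
        ctx_vec = self.context_pool(utterance_reprs)  # (batch, hidden)
        # Concatenation yields the context vector fed to the response decoder.
        return torch.cat([last_vec, ctx_vec], dim=-1) # (batch, 2 * hidden)

Under these assumptions, a decoder (e.g., a GRU or Transformer decoder, unspecified in the abstract) would be initialized from or attend to the returned (batch, 2 * hidden) vector when generating the response.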