Title: Generating Functional Responses Based on Interpretable Variable Space
Authors: H.-P. Shen, Bin Wu
Venue: 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC)
DOI: 10.1109/ITNEC48623.2020.9084670
Published: 2020-06-01
Citations: 0
Abstract
Generating responses with different sentence functions is among the most challenging tasks in dialogue systems. A conventional Sequence-to-Sequence (Seq2seq) model with beam search cannot generate sentences with different functions from the same context. In this paper, we design a new model to address this problem. Our model combines an Autoencoder (AE) with a Seq2seq model through a shared decoder: it encodes posts and responses into different variable spaces and reconstructs the responses respectively. In addition, we introduce a latent variable to capture sentence function and a Triplet loss to make the variable space interpretable. The results show that our model can generate different sentences conditioned on the target functional factor while achieving a high degree of fluency and diversity in its responses.
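The abstract does not give the exact formulation, but the Triplet loss it mentions is typically used to pull embeddings of sentences with the same function together and push embeddings of different functions apart, which is what makes the latent function space separable and hence interpretable. A minimal sketch, assuming standard Euclidean triplet margin loss and purely illustrative 2-D embeddings (the function names and vectors below are hypothetical, not from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on latent function embeddings.

    anchor/positive are embeddings of sentences with the SAME
    sentence function; negative has a DIFFERENT function.
    The loss is zero once the negative is farther from the anchor
    than the positive by at least `margin`.
    """
    d_pos = np.linalg.norm(anchor - positive)  # same-function distance
    d_neg = np.linalg.norm(anchor - negative)  # cross-function distance
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: anchor and positive share a sentence function
# (e.g. both interrogative); negative is a different function.
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([-1.0, 0.0])

# Well-separated case: loss collapses to 0.
loss_separated = triplet_loss(anchor, positive, negative)

# Swapping positive/negative violates the margin, so loss > 0.
loss_violated = triplet_loss(anchor, negative, positive)
```

Minimizing this quantity over many (anchor, positive, negative) triples drives same-function sentences into compact clusters in the latent space, so a target functional factor can later be selected by sampling from the corresponding cluster.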