Post-encoding and contrastive learning method for response selection task

Journal: Icon (Q3, Arts and Humanities)
Pub Date: 2023-03-01
DOI: 10.1109/ICNLP58431.2023.00050
Xianwei Xue, Chunping Li, Zhilin Lu, Youshu Zhang, Shanghua Xiao
{"title":"Post-encoding and contrastive learning method for response selection task","authors":"Xianwei Xue, Chunping Li, Zhilin Lu, Youshu Zhang, Shanghua Xiao","doi":"10.1109/ICNLP58431.2023.00050","DOIUrl":null,"url":null,"abstract":"Retrieval-based dialogue systems have achieved great performance improvements after the raise of pre-trained language models and Transformer mechanisms. In the process of context and response selection, the pre-trained language model can capture the relationship between texts, but current existing methods don’t consider the order of sentences and the relationship between the context and the response. At the same time, as the problem of a small number of positive samples in retrieval-based dialogue systems, it is difficult to train a learning model with high performance. In addition, existing methods usually requires the larger computational cost after splicing the context and the response. To solve the above problems, we propose a post-encoding approach combining with the strategy of contrastive learning. The order of the context and the relationship between sentences in dialogues and response are reflected in the encoding process, and a new loss function is designed for contrastive learning. The propose approach is validated through experiments on public datasets. The experiment results show that our model achieves better performance and effectiveness compared to existing methods.","PeriodicalId":53637,"journal":{"name":"Icon","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Icon","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNLP58431.2023.00050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 0

Abstract

Retrieval-based dialogue systems have achieved substantial performance improvements since the rise of pre-trained language models and Transformer architectures. In context-response selection, a pre-trained language model can capture the relationships between texts, but existing methods do not consider the order of sentences or the relationship between the context and the response. Moreover, because retrieval-based dialogue systems contain only a small number of positive samples, it is difficult to train a high-performing model. In addition, existing methods usually incur a large computational cost after concatenating the context and the response. To address these problems, we propose a post-encoding approach combined with a contrastive learning strategy. The order of the context and the relationships between sentences in the dialogue and the response are reflected in the encoding process, and a new loss function is designed for contrastive learning. The proposed approach is validated through experiments on public datasets. The results show that our model achieves better performance and effectiveness than existing methods.
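
The abstract does not spell out the contrastive loss, but a common way to realize contrastive learning for context-response matching is an InfoNCE-style objective with in-batch negatives: each context is scored against every response in the batch, and only its paired response counts as the positive. The sketch below is a minimal illustration under that assumption; the encoder outputs, the `temperature` value, and the batch layout are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn.functional as F


def in_batch_contrastive_loss(context_emb: torch.Tensor,
                              response_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss for context-response matching (illustrative only).

    context_emb, response_emb: (batch_size, hidden_dim) embeddings produced
    by some post-encoder; the i-th context is paired with the i-th response,
    and all other responses in the batch serve as negatives.
    """
    # Cosine-similarity matrix between every context and every response.
    context_emb = F.normalize(context_emb, dim=-1)
    response_emb = F.normalize(response_emb, dim=-1)
    logits = context_emb @ response_emb.t() / temperature  # (B, B)

    # Diagonal entries correspond to the true context-response pairs.
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder outputs.
    ctx = torch.randn(8, 768)
    rsp = torch.randn(8, 768)
    print(in_batch_contrastive_loss(ctx, rsp))
```

Because the negatives come from other responses already present in the batch, this formulation mitigates the scarcity of positive samples without requiring explicit negative mining, which is one plausible motivation for pairing post-encoding with contrastive learning.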
Source journal: Icon (Arts and Humanities: History and Philosophy of Science) · CiteScore: 0.30 · Self-citation rate: 0.00%