A BERT-Based Semantic Matching Ranker for Open-Domain Question Answering

Shiyi Xu, Feng Liu, Zhen Huang, Yuxing Peng, Dongsheng Li
{"title":"基于bert的开放域问答语义匹配排序器","authors":"Shiyi Xu, Feng Liu, Zhen Huang, Yuxing Peng, Dongsheng Li","doi":"10.1145/3443279.3443301","DOIUrl":null,"url":null,"abstract":"Open-domain question answering (QA) is a hot topic in recent years. Previous work has shown that an effective ranker can improve the overall QA performance by denoising irrelevant context. There are also some recent works leveraged BERT pre-trained model to tackle with open-domain QA tasks, and achieved significant improvements. Nevertheless, these BERT-based models simply concatenates a paragraph with a question, ignoring the semantic similarity of them. In this paper, we propose a simple but effective BERT-based semantic matching ranker to compute the semantic similarity between the paragraph and given question, in which three different representation aggregation functions are explored. To validate the generalized performance of our ranker, we conduct a series of experiments on two public open-domain QA datasets. Experimental results demonstrate that the proposed ranker contributes significant improvements on both the ranking and the final QA performances.","PeriodicalId":414366,"journal":{"name":"Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A BERT-Based Semantic Matching Ranker for Open-Domain Question Answering\",\"authors\":\"Shiyi Xu, Feng Liu, Zhen Huang, Yuxing Peng, Dongsheng Li\",\"doi\":\"10.1145/3443279.3443301\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Open-domain question answering (QA) is a hot topic in recent years. Previous work has shown that an effective ranker can improve the overall QA performance by denoising irrelevant context. 
There are also some recent works leveraged BERT pre-trained model to tackle with open-domain QA tasks, and achieved significant improvements. Nevertheless, these BERT-based models simply concatenates a paragraph with a question, ignoring the semantic similarity of them. In this paper, we propose a simple but effective BERT-based semantic matching ranker to compute the semantic similarity between the paragraph and given question, in which three different representation aggregation functions are explored. To validate the generalized performance of our ranker, we conduct a series of experiments on two public open-domain QA datasets. Experimental results demonstrate that the proposed ranker contributes significant improvements on both the ranking and the final QA performances.\",\"PeriodicalId\":414366,\"journal\":{\"name\":\"Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 4th International Conference on Natural Language Processing and Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3443279.3443301\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th International Conference on Natural Language Processing and Information 
Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3443279.3443301","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Open-domain question answering (QA) has been a hot topic in recent years. Previous work has shown that an effective ranker can improve overall QA performance by filtering out irrelevant context. Several recent works have also leveraged the BERT pre-trained model to tackle open-domain QA tasks, achieving significant improvements. Nevertheless, these BERT-based models simply concatenate a paragraph with a question, ignoring the semantic similarity between them. In this paper, we propose a simple but effective BERT-based semantic matching ranker that computes the semantic similarity between a paragraph and a given question, in which three different representation aggregation functions are explored. To validate the generalization performance of our ranker, we conduct a series of experiments on two public open-domain QA datasets. Experimental results demonstrate that the proposed ranker contributes significant improvements to both ranking and final QA performance.
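The abstract describes a ranker that aggregates BERT token representations of a question and a paragraph into single vectors and scores their semantic similarity, but it does not name the three aggregation functions. As a minimal sketch, assuming the common choices of [CLS]-token, mean, and max pooling (hypothetical here, not confirmed by the abstract), with cosine similarity as the matching score:

```python
import numpy as np

def aggregate(token_embeddings: np.ndarray, mode: str) -> np.ndarray:
    """Collapse a (seq_len, hidden) matrix of BERT token embeddings
    into a single (hidden,) vector using one aggregation function."""
    if mode == "cls":    # first-token ([CLS]) representation
        return token_embeddings[0]
    if mode == "mean":   # average over all token positions
        return token_embeddings.mean(axis=0)
    if mode == "max":    # element-wise max over token positions
        return token_embeddings.max(axis=0)
    raise ValueError(f"unknown aggregation mode: {mode}")

def match_score(question_emb: np.ndarray, paragraph_emb: np.ndarray,
                mode: str = "mean") -> float:
    """Cosine similarity between the aggregated question and paragraph
    vectors; paragraphs would be ranked by this score."""
    q = aggregate(question_emb, mode)
    p = aggregate(paragraph_emb, mode)
    return float(q @ p / (np.linalg.norm(q) * np.linalg.norm(p) + 1e-8))
```

In practice the token embeddings would come from separately encoding the question and each candidate paragraph with a BERT encoder; the dummy arrays here stand in for those outputs, and the actual scoring head used in the paper may differ.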