A Compare-Aggregate Model with Dynamic-Clip Attention for Answer Selection

Weijie Bian, Si Li, Zhao Yang, Guang Chen, Zhiqing Lin

Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, November 6, 2017
DOI: 10.1145/3132847.3133089 (https://doi.org/10.1145/3132847.3133089)
Answer selection for question answering is a challenging task, since it requires effectively capturing the complex semantic relations between questions and answers. Previous notable approaches mainly adopt the general Compare-Aggregate framework, which performs word-level comparison and aggregation. In this paper, unlike previous Compare-Aggregate models, which use a traditional attention mechanism to generate corresponding word-level vectors before comparison, we propose a novel attention mechanism named Dynamic-Clip Attention that is directly integrated into the Compare-Aggregate framework. Dynamic-Clip Attention filters out noise in the attention matrix in order to better mine the semantic relevance of word-level vectors. In addition, unlike previous Compare-Aggregate works that treat the answer selection task as a pointwise classification problem, we propose a listwise ranking approach that models the task by learning the relative order of candidate answers. Experiments on the TrecQA and WikiQA datasets show that our proposed model achieves state-of-the-art performance.
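The abstract describes the two ideas only at a high level. The snippet below is a minimal, hypothetical PyTorch sketch of how "filtering noise in the attention matrix" and a listwise objective over candidate answers could be realized: keeping the top-k weights per row and renormalizing is one plausible clipping scheme, and a KL-divergence loss over the softmax of candidate scores is one way to learn a relative order. The parameter k, the specific loss, and the function names are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def clip_attention(attn: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Hypothetical k-max clipping: keep only the k largest weights in each
    row of the attention matrix, zero out the rest as noise, and renormalize.
    attn: (question_len, answer_len) row-normalized attention weights."""
    topk_vals, _ = attn.topk(k=min(k, attn.size(-1)), dim=-1)
    threshold = topk_vals[..., -1:].expand_as(attn)        # smallest kept weight per row
    clipped = torch.where(attn >= threshold, attn, torch.zeros_like(attn))
    return clipped / clipped.sum(dim=-1, keepdim=True)

def listwise_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Illustrative listwise objective: match the model's score distribution
    over one question's candidate answers to the normalized relevance labels
    via KL divergence.
    scores: (num_candidates,) raw matching scores for one question.
    labels: (num_candidates,) binary relevance labels."""
    pred = F.log_softmax(scores, dim=-1)
    target = labels / labels.sum().clamp(min=1e-8)          # normalize labels to a distribution
    return F.kl_div(pred, target, reduction="sum")
```

In this reading, clipping is applied to each row of the question-to-answer (and answer-to-question) attention matrix before the word-level comparison step, and the listwise loss replaces a per-pair classification loss by scoring all candidate answers of a question jointly.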