Utilizing passage-level relevance and kernel pooling for enhancing BERT-based document reranking

Min Pan, Shuting Zhou, Teng Li, Yu Liu, Quanli Pei, Angela J. Huang, Jimmy X. Huang

Computational Intelligence (Wiley), Q3 in Computer Science, Artificial Intelligence. Published 2024-06-07. DOI: 10.1111/coin.12656 (https://onlinelibrary.wiley.com/doi/10.1111/coin.12656)
Citation count: 0
Abstract
The pre-trained language model (PLM) based on the Transformer encoder, namely BERT, has achieved state-of-the-art results in the field of Information Retrieval. Existing BERT-based ranking models divide documents into passages and aggregate passage-level relevance to rank the document list. However, these common score aggregation strategies cannot capture important semantic information, such as document structure, and have not been extensively studied. In this article, we propose a novel kernel-based score pooling system that captures document-level relevance by aggregating passage-level relevance. In particular, we propose and study several representative kernel pooling functions and several document ranking strategies based on passage-level relevance. Our proposed framework, KnBERT, naturally incorporates kernel functions at the passage level into the BERT-based re-ranking method, which provides a promising avenue for building universal retrieval-then-rerank information retrieval systems. Experiments conducted on two widely used TREC test collections, Robust04 and GOV2, show that KnBERT achieves significant improvements over other BERT-based ranking approaches in terms of the MAP, P@20, and NDCG@20 metrics, with no extra, or even less, computation.
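The core idea of kernel-based score pooling can be illustrated with a small sketch. Below, Gaussian (RBF) kernels placed at fixed relevance levels softly count how many passages score near each level, and a linear combination of the resulting features yields a document-level score. The kernel centers, width, log-scaling, and uniform weights are illustrative assumptions in the style of kernel pooling generally, not the exact configuration used by KnBERT.

```python
import math

def kernel_pooling(passage_scores, mus=(0.1, 0.3, 0.5, 0.7, 0.9), sigma=0.1):
    """Aggregate passage-level relevance scores into kernel features.

    Each Gaussian kernel centered at mu softly counts how many passages
    score near that relevance level; summing over passages yields one
    feature per kernel. Centers and width are illustrative assumptions.
    """
    features = []
    for mu in mus:
        k = sum(math.exp(-((s - mu) ** 2) / (2 * sigma ** 2))
                for s in passage_scores)
        # Log-scaling dampens the effect of many weakly matching passages.
        features.append(math.log1p(k))
    return features

def document_score(passage_scores, weights=None):
    """Combine kernel features into one document-level score.

    In practice the combination weights would be learned; uniform
    weights here are a placeholder assumption.
    """
    feats = kernel_pooling(passage_scores)
    if weights is None:
        weights = [1.0] * len(feats)
    return sum(w * f for w, f in zip(weights, feats))

# Example: three passages with BERT relevance scores in [0, 1].
print(document_score([0.92, 0.15, 0.40]))
```

Because each kernel feature grows monotonically with the number of matching passages, a document whose passages repeatedly hit a relevance level scores higher than one with a single hit at that level, which is the document-level signal that plain max- or mean-pooling over passage scores cannot express.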
About the Journal
This leading international journal promotes and stimulates research in the field of artificial intelligence (AI). Covering a wide range of issues - from the tools and languages of AI to its philosophical implications - Computational Intelligence provides a vigorous forum for the publication of both experimental and theoretical research, as well as surveys and impact studies. The journal is designed to meet the needs of a wide range of AI workers in academic and industrial research.