A Noise-enhanced Fuse Model for Passage Ranking
Author: Yizheng Huang
DOI: 10.1109/WI-IAT55865.2022.00118
Published in: 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), November 2022
Citations: 0
Abstract
With the rapid progress of deep learning in recent years, many language models have achieved strong results on a variety of information retrieval (IR) tasks. Passage ranking plays a vital role in this field, and neural network models now significantly outperform traditional methods. However, fine-tuning a pre-trained model on a downstream task can suffer from the mismatch between the pre-training and downstream objectives, and traditional methods retain advantages of their own: in some cases, BM25 clearly outperforms deep learning models. This paper studies linearly combining a deep learning model's scores with BM25, and adds noise to the model to enhance fine-tuning performance. We conduct experiments on the MS MARCO dataset and show convincing results.
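The linear combination described above can be sketched as a simple score-level fusion: each candidate passage receives a BM25 score and a neural relevance score, and the final ranking score interpolates between the two. The interpolation weight `alpha` and the min-max normalization used to put the two score scales on equal footing are illustrative assumptions, not details taken from the paper.

```python
def min_max_normalize(scores):
    """Scale scores into [0, 1] so BM25 and neural scores are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(bm25_scores, neural_scores, alpha=0.5):
    """Linearly interpolate normalized BM25 and neural scores per passage.

    `alpha` is a hypothetical weight; in practice it would be tuned on a
    held-out set (e.g. MS MARCO dev queries).
    """
    bm25_n = min_max_normalize(bm25_scores)
    neural_n = min_max_normalize(neural_scores)
    return [alpha * b + (1 - alpha) * n for b, n in zip(bm25_n, neural_n)]

# Toy example: three candidate passages for one query.
bm25 = [12.3, 8.7, 15.1]      # raw BM25 scores
neural = [0.91, 0.40, 0.62]   # neural model relevance scores
fused = fuse_scores(bm25, neural, alpha=0.5)
# Rank passage indices by fused score, highest first.
ranking = sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)
```

Note how the fused ranking can differ from either signal alone: here the neural model favors passage 0 while BM25 favors passage 2, and the interpolation arbitrates between them.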