Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models

Orion Weller, Benjamin Van Durme, Dawn Lawrie, Ashwin Paranjape, Yuhao Zhang, Jack Hessel

arXiv - CS - Information Retrieval · 2024-09-17 · https://doi.org/arxiv-2409.11136
Citations: 0
Abstract
Instruction-tuned language models (LMs) can respond to imperative commands, providing a more natural user interface than their base counterparts. In this work, we present Promptriever, the first retrieval model that can be prompted like an LM. To train Promptriever, we curate and release a new instance-level instruction training set derived from MS MARCO, spanning nearly 500k instances. Promptriever not only achieves strong performance on standard retrieval tasks but also follows instructions. We observe: (1) large gains (reaching SoTA) on following detailed relevance instructions (+14.3 p-MRR / +3.1 nDCG on FollowIR), (2) significantly increased robustness to lexical choices/phrasing in the query+instruction (+12.9 Robustness@10 on InstructIR), and (3) the ability to perform hyperparameter search via prompting to reliably improve retrieval performance (+1.4 average increase on BEIR). Promptriever demonstrates that retrieval models can be controlled with prompts on a per-query basis, setting the stage for future work aligning LM prompting techniques with information retrieval.
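The core interface the abstract describes is per-query prompting: a natural-language instruction is appended to the query before retrieval, and the (instruction-trained) retriever ranks documents against the combined text. The sketch below illustrates only that mechanics with a toy term-overlap scorer as a hypothetical stand-in for the actual trained bi-encoder; the example texts, function names, and scoring rule are all illustrative assumptions, not the paper's method or model.

```python
import re

def embed(text):
    # Toy stand-in for a dense encoder: lowercase bag-of-words.
    # A real Promptriever-style setup would embed with the trained bi-encoder.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, doc):
    # Jaccard overlap between token sets (illustrative, not dense similarity).
    q, d = embed(query), embed(doc)
    return len(q & d) / (len(q | d) or 1)

def rank(query, docs, instruction=""):
    # Per-query prompting: concatenate the instruction onto the query,
    # then rank every document against the prompted query.
    prompted = f"{query} {instruction}".strip()
    return sorted(docs, key=lambda d: score(prompted, d), reverse=True)

docs = [
    "A review of espresso machines for home baristas.",
    "Espresso history: invention and spread across Italy.",
]

plain = rank("espresso machines", docs)
prompted = rank(
    "espresso machines", docs,
    instruction="relevant documents discuss history, not product reviews",
)
```

Without the instruction, the product review ranks first; with it, the history document does — the instruction reshapes relevance for this one query without retraining, which is the behavior the abstract attributes to Promptriever.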