A Self-Distilled Learning to Rank Model for Ad-hoc Retrieval
S. Keshvari, Farzan Saeedi, Hadi Sadoghi Yazdi, F. Ensan
DOI: 10.1145/3681784 (https://doi.org/10.1145/3681784)
Published: 2024-07-25
Citations: 0
Abstract
Learning to rank models are broadly applied in ad-hoc retrieval for scoring and sorting documents based on their relevance to textual queries. The generalizability of the trained model, however, can affect retrieval performance, particularly when the data include noise and outliers or are incorrectly collected or measured. In this paper, we introduce a Self-Distilled Learning to Rank (SDLR) framework for ad-hoc retrieval and analyze its performance over a range of retrieval datasets, including in the presence of feature noise. SDLR assigns a confidence weight to each training sample, aiming to reduce the impact of noisy and outlier data on the training process. The confidence weight is approximated from the feature distributions derived from the values observed across the documents labeled for a query in a listwise training sample. SDLR includes a distillation process that passes the underlying patterns of confidence-weight assignment from the teacher model to the student model. We empirically illustrate that SDLR outperforms state-of-the-art learning to rank models in ad-hoc retrieval. We thoroughly investigate SDLR's performance in different settings, including when no distillation strategy is applied, when different portions of the data are used to train the teacher and the student models, and when both models are trained on identical data. We show that SDLR is more effective when the training data are split between a teacher and a student model. We also show that SDLR's performance is robust when data features are noisy.
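The abstract does not spell out how the confidence weights are computed, but the idea of down-weighting outlier documents based on within-list feature distributions can be sketched. The Python snippet below is a minimal illustration under assumed choices: a Gaussian kernel over per-feature z-scores as the typicality estimator, and a ListNet-style top-one loss as the listwise objective. The function names (confidence_weights, weighted_listwise_loss) and the estimator itself are illustrative assumptions, not the paper's actual formulation, and the teacher-to-student transfer of weighting patterns is only indicated in a closing comment.

import numpy as np

def confidence_weights(X, eps=1e-8):
    # X: (n_docs, n_features) feature matrix of the documents labeled
    # for a single query, i.e., one listwise training sample.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + eps
    z = (X - mu) / sigma                    # per-feature z-scores within the list
    typicality = np.exp(-0.5 * z ** 2)      # Gaussian kernel: near 1 for typical values
    w = typicality.mean(axis=1)             # average typicality across features
    return w / (w.max() + eps)              # most typical document gets weight ~1

def weighted_listwise_loss(scores, labels, weights):
    # ListNet-style top-one cross entropy in which each document's
    # contribution is scaled by its confidence weight, so noisy and
    # outlier documents influence the gradient less.
    p_true = np.exp(labels) / np.exp(labels).sum()
    log_p_pred = scores - np.log(np.exp(scores).sum())
    return -(weights * p_true * log_p_pred).sum()

# Toy usage: one query with 10 candidate documents and 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))
X[0] += 8.0                                 # inject an outlier document
labels = rng.integers(0, 3, size=10).astype(float)
scores = rng.normal(size=10)                # stand-in for a ranker's outputs
w = confidence_weights(X)                   # w[0] comes out much smaller than the rest
loss = weighted_listwise_loss(scores, labels, w)
# In the full SDLR pipeline, a teacher model trained on one portion of the
# data would pass its confidence-weighting patterns on to a student model
# trained on the remainder; that distillation step is not sketched here.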