Title: Utilizing Axiomatic Perturbations to Guide Neural Ranking Models
Authors: Zitong Cheng, Hui Fang
DOI: 10.1145/3409256.3409828
Published in: Proceedings of the 2020 ACM SIGIR International Conference on Theory of Information Retrieval
Publication date: 2020-09-14
Utilizing Axiomatic Perturbations to Guide Neural Ranking Models
Axiomatic approaches aim to utilize reasonable retrieval constraints to guide the search for optimal retrieval models. Existing studies have shown the effectiveness of axiomatic approaches in improving retrieval performance, either through the derivation of new basic retrieval models or through modifications of existing ones. Recently, neural network models have attracted more attention in the research community. Since these models are learned from training data, it would be interesting to study how to utilize axiomatic approaches to guide the training process so that the learned models satisfy retrieval constraints and achieve better retrieval performance. In this paper, we propose to utilize axiomatic perturbations to construct training data sets for neural ranking models. The perturbed data sets are constructed so as to amplify the desirable properties that any reasonable retrieval model should satisfy. As a result, the models learned from the perturbed data sets are expected to satisfy more retrieval constraints and achieve better retrieval performance. Experimental results show that the models learned from the perturbed data sets indeed perform better than those learned from the original data sets.
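To make the idea of axiomatic perturbation concrete, the sketch below illustrates one plausible form such a perturbation could take, based on the well-known TFC1 term-frequency constraint from the axiomatic IR literature (all else equal, a document with more occurrences of a query term should score higher). The paper's exact perturbation scheme is not given in this abstract, so the function names, the pair-construction strategy, and the choice of TFC1 here are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (assumed, not the paper's actual scheme): derive extra
# preference pairs for a pairwise neural ranker by perturbing documents so
# that the TFC1 axiom dictates which copy should rank higher.

def tfc1_perturb(query_terms, doc_tokens):
    """Return a copy of doc_tokens with one extra occurrence of the query
    term that currently appears least often in the document. By TFC1, the
    perturbed copy should receive a higher relevance score."""
    counts = {t: doc_tokens.count(t) for t in query_terms}
    target = min(counts, key=counts.get)  # rarest query term in this doc
    return doc_tokens + [target]

def build_preference_pairs(query_terms, docs):
    """For each original document, emit a (preferred, other) training pair
    in which the axiom-perturbed copy is labeled as more relevant."""
    return [(tfc1_perturb(query_terms, d), d) for d in docs]

query = ["neural", "ranking"]
doc = ["a", "neural", "model", "for", "search"]
pairs = build_preference_pairs(query, [doc])
# pairs[0] pairs the original doc with a copy that gains one "ranking" token
```

Pairs produced this way would be mixed into the original training set, nudging the learned model toward scoring functions that respect the constraint.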