{"title":"列表神经排序模型","authors":"Razieh Rahimi, Ali Montazeralghaem, J. Allan","doi":"10.1145/3341981.3344245","DOIUrl":null,"url":null,"abstract":"Several neural networks have been developed for end-to-end training of information retrieval models. These networks differ in many aspects including architecture, training data, data representations, and loss functions. However, only pointwise and pairwise loss functions are employed in training of end-to-end neural ranking models without human-engineered features. These loss functions do not consider the ranks of documents in the estimation of loss over training data. Because of this limitation, conventional learning-to-rank models using pointwise or pairwise loss functions have generally shown lower performance compared to those using listwise loss functions. Following this observation, we propose to employ listwise loss functions for the training of neural ranking models. We empirically demonstrate that a listwise neural ranker outperforms a pairwise neural ranking model. In addition, we achieve further improvements in the performance of the listwise neural ranking models by query-based sampling of training data.","PeriodicalId":173154,"journal":{"name":"Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Listwise Neural Ranking Models\",\"authors\":\"Razieh Rahimi, Ali Montazeralghaem, J. Allan\",\"doi\":\"10.1145/3341981.3344245\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Several neural networks have been developed for end-to-end training of information retrieval models. These networks differ in many aspects including architecture, training data, data representations, and loss functions. 
However, only pointwise and pairwise loss functions are employed in training of end-to-end neural ranking models without human-engineered features. These loss functions do not consider the ranks of documents in the estimation of loss over training data. Because of this limitation, conventional learning-to-rank models using pointwise or pairwise loss functions have generally shown lower performance compared to those using listwise loss functions. Following this observation, we propose to employ listwise loss functions for the training of neural ranking models. We empirically demonstrate that a listwise neural ranker outperforms a pairwise neural ranking model. In addition, we achieve further improvements in the performance of the listwise neural ranking models by query-based sampling of training data.\",\"PeriodicalId\":173154,\"journal\":{\"name\":\"Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3341981.3344245\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information 
Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3341981.3344245","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Several neural networks have been developed for end-to-end training of information retrieval models. These networks differ in many aspects, including architecture, training data, data representations, and loss functions. However, only pointwise and pairwise loss functions have been employed in the training of end-to-end neural ranking models without human-engineered features. These loss functions do not consider the ranks of documents when estimating the loss over the training data. Because of this limitation, conventional learning-to-rank models using pointwise or pairwise loss functions have generally shown lower performance than those using listwise loss functions. Motivated by this observation, we propose to employ listwise loss functions for the training of neural ranking models. We empirically demonstrate that a listwise neural ranker outperforms a pairwise neural ranking model. In addition, we further improve the performance of the listwise neural ranking models through query-based sampling of the training data.
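The abstract does not specify which listwise loss the authors use, so as an illustrative sketch only: one common listwise objective is the ListNet-style top-one loss, which compares the softmax distribution induced by the model's scores against the distribution induced by the relevance labels over the whole ranked list (rather than over isolated documents or pairs). The function names below are hypothetical, not from the paper.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of real-valued scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def listnet_top_one_loss(scores, relevance):
    """ListNet-style top-one listwise loss (a sketch, not the paper's exact loss).

    Cross-entropy between the "top-one" probability distribution implied by
    the relevance labels and the one implied by the predicted scores, so the
    loss depends on the entire list of documents for a query at once.
    """
    p_true = softmax(relevance)
    p_pred = softmax(scores)
    return -sum(t * math.log(p + 1e-12) for t, p in zip(p_true, p_pred))

# Toy example: one query with three documents, labels 3 > 1 > 0.
relevance = [3.0, 1.0, 0.0]
loss_aligned = listnet_top_one_loss([2.5, 1.2, 0.1], relevance)   # scores agree with labels
loss_reversed = listnet_top_one_loss([0.1, 1.2, 2.5], relevance)  # scores reverse the order
```

A ranker whose scores agree with the label ordering incurs a lower loss than one that reverses it, which is the property that distinguishes listwise objectives from pointwise losses computed per document.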