Title: A pairwise learning to rank algorithm based on bounded loss and preference weight
Authors: Xianlun Tang, Deyi Xiong, Jiaxin Li, Yali Wan
Published in: 2017 Chinese Automation Congress (CAC), October 2017
DOI: 10.1109/CAC.2017.8244044
Citations: 1
Abstract
Traditional pairwise learning-to-rank algorithms pay little attention to the top-ranked documents in a query's result list and perform poorly on data sets with multiple relevance grades. In this paper, a novel pairwise learning-to-rank algorithm is proposed to address this problem. The algorithm defines a bounded loss function and incorporates preference weights between document pairs into it. Because batch gradient descent converges slowly and stochastic gradient descent is easily affected by noise, a mini-batch gradient descent method is used for optimization, which makes the number of iterations independent of the sample size. Finally, experiments on the OHSUMED and MQ2008 data sets demonstrate the effectiveness of the proposed algorithm.
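The general scheme described in the abstract can be sketched in code. Note that the paper's exact bounded loss and preference-weighting formula are not given here, so the sketch below makes illustrative assumptions: the bounded loss is taken to be a sigmoid of the negative score margin (so it lies in (0, 1) rather than growing without bound like a hinge or exponential loss), the preference weight is supplied per pair (e.g., derived from the difference in relevance grades), and the scoring function is linear. The `train`, `bounded_pairwise_loss`, and `grad_bounded_pairwise_loss` names are hypothetical, not from the paper.

```python
import numpy as np

def bounded_pairwise_loss(w, xi, xj, pref):
    """Illustrative bounded pairwise loss (assumption, not the paper's exact form).

    sigmoid(-margin) lies in (0, 1), so a single badly-ordered pair cannot
    dominate the objective; `pref` scales the pair by its preference weight.
    """
    margin = w @ (xi - xj)           # score(xi) - score(xj) for a linear model
    return pref / (1.0 + np.exp(margin))

def grad_bounded_pairwise_loss(w, xi, xj, pref):
    """Gradient of the loss above with respect to w."""
    diff = xi - xj
    margin = w @ diff
    s = 1.0 / (1.0 + np.exp(margin))  # sigmoid(-margin)
    # d/dw [pref * sigmoid(-w·diff)] = -pref * s * (1 - s) * diff
    return -pref * s * (1.0 - s) * diff

def train(pairs, dim, lr=0.5, batch_size=8, epochs=50, seed=0):
    """Mini-batch gradient descent over weighted document pairs.

    `pairs` is a list of (xi, xj, pref) tuples where xi should rank above xj.
    Each update averages the gradient over a fixed-size mini-batch, so the
    cost per iteration does not depend on the total number of pairs.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(epochs):
        rng.shuffle(pairs)            # fresh pair order each epoch
        for start in range(0, len(pairs), batch_size):
            batch = pairs[start:start + batch_size]
            g = np.mean(
                [grad_bounded_pairwise_loss(w, xi, xj, p) for xi, xj, p in batch],
                axis=0,
            )
            w -= lr * g
    return w
```

As a quick check, training on synthetic pairs whose preferred document has a larger first feature should produce a weight vector that scores preferred documents higher.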