{"title":"Recommender Systems Algorithm Selection for Ranking Prediction on Implicit Feedback Datasets","authors":"Lukas Wegmeth, Tobias Vente, Joeran Beel","doi":"arxiv-2409.05461","DOIUrl":null,"url":null,"abstract":"The recommender systems algorithm selection problem for ranking prediction on\nimplicit feedback datasets is under-explored. Traditional approaches in\nrecommender systems algorithm selection focus predominantly on rating\nprediction on explicit feedback datasets, leaving a research gap for ranking\nprediction on implicit feedback datasets. Algorithm selection is a critical\nchallenge for nearly every practitioner in recommender systems. In this work,\nwe take the first steps toward addressing this research gap. We evaluate the\nNDCG@10 of 24 recommender systems algorithms, each with two hyperparameter\nconfigurations, on 72 recommender systems datasets. We train four optimized\nmachine-learning meta-models and one automated machine-learning meta-model with\nthree different settings on the resulting meta-dataset. Our results show that\nthe predictions of all tested meta-models exhibit a median Spearman correlation\nranging from 0.857 to 0.918 with the ground truth. We show that the median\nSpearman correlation between meta-model predictions and the ground truth\nincreases by an average of 0.124 when the meta-model is optimized to predict\nthe ranking of algorithms instead of their performance. Furthermore, in terms\nof predicting the best algorithm for an unknown dataset, we demonstrate that\nthe best optimized traditional meta-model, e.g., XGBoost, achieves a recall of\n48.6%, outperforming the best tested automated machine learning meta-model,\ne.g., AutoGluon, which achieves a recall of 47.2%.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05461","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The recommender systems algorithm selection problem for ranking prediction on implicit feedback datasets is under-explored. Traditional approaches to recommender systems algorithm selection focus predominantly on rating prediction on explicit feedback datasets, leaving a research gap for ranking prediction on implicit feedback datasets. Algorithm selection is a critical challenge for nearly every practitioner in recommender systems. In this work, we take the first steps toward addressing this research gap. We evaluate the NDCG@10 of 24 recommender systems algorithms, each with two hyperparameter configurations, on 72 recommender systems datasets.
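To make the evaluation metric concrete, here is a minimal sketch (not the authors' code) of how a single NDCG@10 entry of such a meta-dataset could be computed. The function and variable names are illustrative; binary relevance is assumed, as is standard for implicit feedback.

```python
# Hypothetical sketch: NDCG@10 for one user's top-k recommendations,
# given a held-out set of relevant items (binary relevance assumed).
import numpy as np

def ndcg_at_k(recommended: list, relevant: set, k: int = 10) -> float:
    """Binary-relevance NDCG@k."""
    gains = np.array([1.0 if item in relevant else 0.0
                      for item in recommended[:k]])
    discounts = 1.0 / np.log2(np.arange(2, gains.size + 2))
    dcg = float(np.sum(gains * discounts))
    ideal_hits = min(len(relevant), k)
    idcg = float(np.sum(1.0 / np.log2(np.arange(2, ideal_hits + 2))))
    return dcg / idcg if idcg > 0 else 0.0

# Example: two of the top-5 recommendations are held-out positives.
print(ndcg_at_k(["a", "b", "c", "d", "e"], {"a", "d", "x"}, k=10))
```

Averaging such per-user values over all users of a dataset yields one (algorithm, dataset) cell of the meta-dataset.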
We train four optimized machine-learning meta-models and one automated machine-learning meta-model, the latter with three different settings, on the resulting meta-dataset.
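The following sketch illustrates one plausible shape of such a traditional meta-model, mapping dataset meta-features to per-algorithm performance. XGBoost is one of the tested meta-models per the abstract, but the meta-feature dimensionality, the per-algorithm regressor design, and the split are assumptions for illustration only.

```python
# Hypothetical sketch of a traditional meta-model on the meta-dataset.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_datasets, n_meta_features, n_algorithms = 72, 10, 48  # 24 algorithms x 2 configs

X = rng.normal(size=(n_datasets, n_meta_features))  # dataset meta-features (placeholder)
Y = rng.uniform(size=(n_datasets, n_algorithms))    # NDCG@10 per algorithm (placeholder)

# One regressor per algorithm; a proper leave-one-dataset-out protocol
# is omitted here for brevity.
models = [XGBRegressor(n_estimators=100).fit(X[:-1], Y[:-1, j])
          for j in range(n_algorithms)]
scores = np.array([m.predict(X[-1:])[0] for m in models])  # held-out dataset
print("predicted best algorithm index:", int(scores.argmax()))
```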
Our results show that the predictions of all tested meta-models exhibit a median Spearman correlation with the ground truth ranging from 0.857 to 0.918.
We show that the median Spearman correlation between meta-model predictions and the ground truth increases by an average of 0.124 when the meta-model is optimized to predict the ranking of algorithms rather than their performance.
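A minimal sketch of this evaluation on one held-out dataset follows. Training on rank-transformed targets (e.g., via scipy.stats.rankdata) is one way to optimize for ranking rather than performance; whether this matches the paper's exact setup is an assumption, and the numbers below are illustrative.

```python
# Hypothetical sketch: Spearman correlation between predicted algorithm
# scores and ground-truth NDCG@10 on one held-out dataset.
import numpy as np
from scipy.stats import spearmanr, rankdata

ground_truth = np.array([0.31, 0.27, 0.42, 0.18, 0.35])  # NDCG@10 of 5 algorithms
predicted    = np.array([0.30, 0.29, 0.40, 0.20, 0.33])  # meta-model output

rho, _ = spearmanr(predicted, ground_truth)
print(f"Spearman correlation: {rho:.3f}")

# Rank-transformed training targets for a ranking-optimized variant:
print("rank targets:", rankdata(ground_truth))
```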
Furthermore, in terms of predicting the best algorithm for an unknown dataset, we demonstrate that the best optimized traditional meta-model, i.e., XGBoost, achieves a recall of 48.6%, outperforming the best tested automated machine-learning meta-model, i.e., AutoGluon, which achieves a recall of 47.2%.
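One natural reading of this recall figure is the fraction of held-out datasets on which the meta-model's top-1 prediction matches the true best algorithm; that reading is an assumption here, as the paper defines the exact protocol. A minimal sketch under that assumption:

```python
# Hypothetical sketch: top-1 "recall" as the hit rate of predicting the
# true best algorithm per dataset.
import numpy as np

def top1_recall(predicted_scores: np.ndarray, true_scores: np.ndarray) -> float:
    """Both arrays have shape (n_datasets, n_algorithms)."""
    hits = predicted_scores.argmax(axis=1) == true_scores.argmax(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(1)
true = rng.uniform(size=(72, 48))                        # placeholder ground truth
pred = true + rng.normal(scale=0.05, size=true.shape)    # noisy predictions
print(f"top-1 recall: {top1_recall(pred, true):.3f}")
```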