{"title":"Multi-armed recommender system bandit ensembles","authors":"Rocío Cañamares, Marcos Redondo, P. Castells","doi":"10.1145/3298689.3346984","DOIUrl":null,"url":null,"abstract":"It has long been found that well-configured recommender system ensembles can achieve better effectiveness than the combined systems separately. Sophisticated approaches have been developed to automatically optimize the ensembles' configuration to maximize their performance gains. However most work in this area has targeted simplified scenarios where algorithms are tested and compared on a single non-interactive run. In this paper we consider a more realistic perspective bearing in mind the cyclic nature of the recommendation task, where a large part of the system's input is collected from the reaction of users to the recommendations they are delivered. The cyclic process provides the opportunity for ensembles to observe and learn about the effectiveness of the combined algorithms, and improve the ensemble configuration progressively. In this paper we explore the adaptation of a multi-armed bandit approach to achieve this, by representing the combined systems as arms, and the ensemble as a bandit that at each step selects an arm to produce the next round of recommendations. We report experiments showing the effectiveness of this approach compared to ensembles that lack the iterative perspective. Along the way, we find illustrative pitfall examples that can result from common, single-shot offline evaluation setups.","PeriodicalId":215384,"journal":{"name":"Proceedings of the 13th ACM Conference on Recommender Systems","volume":"201 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 13th ACM Conference on Recommender Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3298689.3346984","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22
Abstract
It has long been found that well-configured recommender system ensembles can achieve better effectiveness than the combined systems taken separately. Sophisticated approaches have been developed to automatically optimize an ensemble's configuration and maximize its performance gains. However, most work in this area has targeted simplified scenarios where algorithms are tested and compared on a single non-interactive run. In this paper we take a more realistic perspective that bears in mind the cyclic nature of the recommendation task, where a large part of the system's input is collected from users' reactions to the recommendations delivered to them. The cyclic process gives ensembles the opportunity to observe and learn about the effectiveness of the combined algorithms, and to improve the ensemble configuration progressively. We explore the adaptation of a multi-armed bandit approach to achieve this: the combined systems are represented as arms, and the ensemble as a bandit that at each step selects an arm to produce the next round of recommendations. We report experiments showing the effectiveness of this approach compared to ensembles that lack the iterative perspective. Along the way, we identify illustrative examples of pitfalls that can result from common, single-shot offline evaluation setups.
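To make the arm-selection loop concrete, below is a minimal sketch in Python of a bandit ensemble over recommender arms. It is an assumption-laden illustration, not the authors' implementation: the paper does not prescribe a particular policy, so the sketch uses a simple ε-greedy rule with binary click feedback, and the `BanditEnsemble` class and simulated reward function are hypothetical names introduced here.

```python
import random


class BanditEnsemble:
    """Illustrative epsilon-greedy bandit over recommender 'arms'.

    Each arm is a callable user -> recommendation; the reward is the
    observed feedback (e.g. 1 if clicked) for the round it produced.
    This is a sketch of the general technique, not the paper's method.
    """

    def __init__(self, arms, epsilon=0.1):
        self.arms = arms                    # list of recommender callables
        self.epsilon = epsilon              # exploration probability
        self.counts = [0] * len(arms)       # rounds assigned to each arm
        self.values = [0.0] * len(arms)     # running mean reward per arm

    def select_arm(self):
        # Explore with probability epsilon, otherwise exploit the arm
        # with the best observed mean reward so far.
        if random.random() < self.epsilon:
            return random.randrange(len(self.arms))
        return max(range(len(self.arms)), key=lambda i: self.values[i])

    def update(self, arm, reward):
        # Incremental update of the chosen arm's mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    def run_round(self, user, observe_feedback):
        # One step of the cyclic process: pick an arm, deliver its
        # recommendation, collect the user's reaction, learn from it.
        arm = self.select_arm()
        recommendation = self.arms[arm](user)
        reward = observe_feedback(user, recommendation)
        self.update(arm, reward)
        return recommendation, reward


# Toy usage with two hypothetical recommenders and simulated feedback.
def popular_rec(user):
    return "item_popular"


def random_rec(user):
    return f"item_{random.randrange(100)}"


def observe_feedback(user, item):
    # Hypothetical click simulator: the popular item is clicked 30% of
    # the time, any random item 5% of the time.
    p = 0.3 if item == "item_popular" else 0.05
    return 1 if random.random() < p else 0


ensemble = BanditEnsemble([popular_rec, random_rec], epsilon=0.1)
for user in range(1000):
    ensemble.run_round(user, observe_feedback)
print(ensemble.values)  # the popular arm's mean reward should dominate
```

The key design point the abstract argues for is visible in the loop: unlike a single-shot offline comparison, the ensemble's estimate of each arm's value is updated from the feedback its own recommendations elicit, so the configuration improves progressively over rounds.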