{"title":"大规模生产环境中冷启动技术评估框架","authors":"moran haham","doi":"10.1145/3523227.3547385","DOIUrl":null,"url":null,"abstract":"In recommender systems, cold-start issues are situations where no previous events (e.g., ratings), are known for certain users or items. Mitigating cold-start situations is a fundamental problem in almost any recommender system [3, 5]. In real-life, large-scale production systems, the challenge of optimizing the cold-start strategy is even greater. We present an end-to-end framework for evaluating and comparing different cold-start strategies. By applying this framework in Outbrain’s recommender system, we were able to reduce our cold-start costs by half, while supporting both offline and online settings. Our framework solves the pain of benchmarking numerous cold-start techniques using surrogate accuracy metrics on offline datasets - coupled with an extensive, cost-controlled online A/B test. In this abstract, We’ll start with a short introduction to the cold-start challenge in recommender systems. Next, we will explain the motivation for a framework for cold-start techniques. Lastly, we will then describe - step by step - how we used the framework to reduce our exploration by more than 50%.","PeriodicalId":443279,"journal":{"name":"Proceedings of the 16th ACM Conference on Recommender Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluation Framework for Cold-Start Techniques in Large-Scale Production Settings\",\"authors\":\"moran haham\",\"doi\":\"10.1145/3523227.3547385\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recommender systems, cold-start issues are situations where no previous events (e.g., ratings), are known for certain users or items. Mitigating cold-start situations is a fundamental problem in almost any recommender system [3, 5]. In real-life, large-scale production systems, the challenge of optimizing the cold-start strategy is even greater. We present an end-to-end framework for evaluating and comparing different cold-start strategies. By applying this framework in Outbrain’s recommender system, we were able to reduce our cold-start costs by half, while supporting both offline and online settings. Our framework solves the pain of benchmarking numerous cold-start techniques using surrogate accuracy metrics on offline datasets - coupled with an extensive, cost-controlled online A/B test. In this abstract, We’ll start with a short introduction to the cold-start challenge in recommender systems. Next, we will explain the motivation for a framework for cold-start techniques. 
Lastly, we will then describe - step by step - how we used the framework to reduce our exploration by more than 50%.\",\"PeriodicalId\":443279,\"journal\":{\"name\":\"Proceedings of the 16th ACM Conference on Recommender Systems\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 16th ACM Conference on Recommender Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3523227.3547385\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th ACM Conference on Recommender Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3523227.3547385","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evaluation Framework for Cold-Start Techniques in Large-Scale Production Settings
In recommender systems, cold-start issues are situations where no previous events (e.g., ratings) are known for certain users or items. Mitigating cold-start situations is a fundamental problem in almost any recommender system [3, 5]. In real-life, large-scale production systems, the challenge of optimizing the cold-start strategy is even greater. We present an end-to-end framework for evaluating and comparing different cold-start strategies. By applying this framework in Outbrain's recommender system, we were able to cut our cold-start costs in half while supporting both offline and online settings. Our framework removes the pain of benchmarking numerous cold-start techniques: it couples surrogate accuracy metrics computed on offline datasets with an extensive, cost-controlled online A/B test. In this abstract, we start with a short introduction to the cold-start challenge in recommender systems. Next, we explain the motivation for a framework for evaluating cold-start techniques. Lastly, we describe, step by step, how we used the framework to reduce our exploration cost by more than 50%.
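To make the offline stage of such a framework concrete, the sketch below shows one plausible way to benchmark several candidate cold-start strategies on logged cold-start events with a surrogate accuracy metric (hit rate at k), after which the top performers would be promoted to a cost-controlled online A/B test. This is a minimal illustration under our own assumptions: the strategy names, event schema, and helpers (evaluate_offline, hit_rate_at_k) are hypothetical and are not the paper's actual API.

```python
# Hypothetical sketch of the offline benchmarking stage of a cold-start
# evaluation framework. Candidate strategies are scored on held-out
# cold-start events with a surrogate accuracy metric; the best candidates
# would then be promoted to a cost-controlled online A/B test.

from typing import Callable, Dict, List, Sequence


def hit_rate_at_k(recommended: Sequence[str], clicked: Sequence[str], k: int = 10) -> float:
    """Surrogate accuracy metric: 1.0 if any clicked item appears in the top-k recommendations."""
    top_k = set(recommended[:k])
    return 1.0 if any(item in top_k for item in clicked) else 0.0


def evaluate_offline(
    strategies: Dict[str, Callable[[dict], List[str]]],
    cold_start_events: List[dict],
    k: int = 10,
) -> Dict[str, float]:
    """Score each candidate cold-start strategy on logged cold-start events."""
    scores: Dict[str, float] = {}
    for name, recommend in strategies.items():
        hits = [
            hit_rate_at_k(recommend(event["context"]), event["clicks"], k)
            for event in cold_start_events
        ]
        scores[name] = sum(hits) / len(hits) if hits else 0.0
    return scores


if __name__ == "__main__":
    # Toy strategies: globally popular items vs. items similar to the request context.
    strategies = {
        "popularity": lambda ctx: ["a", "b", "c"],
        "content_similarity": lambda ctx: ctx.get("similar_items", []),
    }
    events = [
        {"context": {"similar_items": ["x", "a"]}, "clicks": ["x"]},
        {"context": {"similar_items": ["b"]}, "clicks": ["c"]},
    ]
    print(evaluate_offline(strategies, events))
    # In a production setting, the top offline performers would move on to an
    # online A/B test with an explicit cap on exploration (cold-start) cost.
```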