Scalable High-Performance Architecture for Evolving Recommender System
R. Singh, Mayank Mishra, Rekha Singhal
Proceedings of the 3rd Workshop on Machine Learning and Systems
Published: 2023-05-08 · DOI: 10.1145/3578356.3592594 (https://doi.org/10.1145/3578356.3592594)
Citations: 0
Abstract
Recommender systems are expected to scale to serve a large number of recommendations to customers while keeping recommendation latency within stringent limits. Such requirements make architecting a recommender system a challenge, and the challenge is exacerbated when different ML/DL models are employed simultaneously. This paper presents how we accelerated a recommender system that combined a state-of-the-art Graph Neural Network (GNN) based DL model with a dot-product-based ML model. The ML model was used offline, with its recommendations cached, while the GNN-based model provided recommendations in real time. Merging the cached offline results with the results of the real-time session-based recommendation model posed a further latency challenge. With careful re-architecting, we reduced the model's recommendation latency from 1.5 seconds to under 65 milliseconds and improved throughput from 1 recommendation per second to 1500 recommendations per second on a VM with a 16-core CPU and 64 GB of RAM.
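The abstract describes serving a blend of cached offline recommendations and real-time session-based ones. The paper does not give the merge logic, so the following is only a minimal illustrative sketch: it assumes both sources yield (item, score) pairs and blends them with a hypothetical `realtime_weight` parameter, which is not from the paper.

```python
from typing import Dict, List, Tuple


def merge_recommendations(
    offline_cached: List[Tuple[str, float]],
    realtime: List[Tuple[str, float]],
    top_k: int = 5,
    realtime_weight: float = 0.7,  # assumed blending knob, not from the paper
) -> List[str]:
    """Blend pre-computed offline scores with real-time session scores.

    Items present in both sources get a weighted combination of scores;
    items present in only one source keep their single weighted score.
    """
    scores: Dict[str, float] = {}
    for item, s in offline_cached:
        scores[item] = (1.0 - realtime_weight) * s
    for item, s in realtime:
        scores[item] = scores.get(item, 0.0) + realtime_weight * s
    # Rank by blended score, highest first, and return the top-k item ids.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [item for item, _ in ranked[:top_k]]
```

In a latency-sensitive deployment like the one described, the offline list would come from a cache lookup and the real-time list from the GNN model, so the merge itself stays a cheap in-memory operation.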