Hetero-Rec: Optimal Deployment of Embeddings for High-Speed Recommendations
Chinmay Mahajan, Ashwin Krishnan, M. Nambiar, Rekha Singhal
Proceedings of the Second International Conference on AI-ML Systems, October 2022. DOI: 10.1145/3564121.3564134
Abstract
We see two trends emerging from the exponential growth of AI research: a rise in the adoption of AI-based models in enterprise applications, and the development of different types of hardware accelerators, with varying memory and compute architectures, for accelerating AI workloads. Accelerators may offer several types of memory, differing in access latency and storage capacity. A recommendation model's inference latency is strongly influenced by the time to fetch embeddings from the embedding tables. In this paper, we present Hetero-Rec, a framework for the optimal deployment of embeddings for faster inference of recommendation models. The main idea is to cache frequently accessed embeddings in faster memories to reduce average latency during inference. Hetero-Rec uses a performance-model-based optimization algorithm together with a spline-based learned index to determine the optimal reservation of portions of the embedding tables across the memory types available for deployment, based on their past access patterns. We validate our approach on heterogeneous memory architectures, namely URAM (UltraRAM), BRAM (Block Random Access Memory), HBM (High-Bandwidth Memory), and DDR (Double Data Rate) memory, on a server platform with an FPGA accelerator. We observe that the presented optimization algorithm for dynamic placement of embedding tables reduces average latency by up to 1.52x, 1.68x, and 2.91x for weekly, daily, and hourly access patterns in the transaction history, respectively, compared to state-of-the-art systems.
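The core placement idea, reserving the fastest memories for the most frequently accessed embedding rows, can be illustrated with a minimal sketch. The tier names below match the paper's memory hierarchy, but the capacities, the simple greedy policy, and the function names are hypothetical assumptions for illustration; the paper itself uses a performance-model-based optimizer and a spline-based learned index rather than this plain frequency ranking.

```python
# Illustrative sketch (not the paper's actual algorithm): greedily place the
# hottest embedding rows into the fastest memory tier that still has capacity,
# mirroring Hetero-Rec's idea of caching frequent embeddings in fast memory.
from collections import Counter

# Hypothetical memory tiers, fastest first: (name, capacity in rows).
TIERS = [("URAM", 1_000), ("BRAM", 10_000), ("HBM", 500_000), ("DDR", 10_000_000)]

def place_embeddings(access_log):
    """Map each embedding row id to a tier, hottest rows to fastest tiers."""
    freq = Counter(access_log)                       # past access pattern
    ranked = [row for row, _ in freq.most_common()]  # hottest first
    placement, tier_idx, used = {}, 0, 0
    for row in ranked:
        while tier_idx < len(TIERS) and used >= TIERS[tier_idx][1]:
            tier_idx, used = tier_idx + 1, 0         # tier full, fall to slower one
        if tier_idx == len(TIERS):
            break                                    # all tiers exhausted
        placement[row] = TIERS[tier_idx][0]
        used += 1
    return placement

# Example: all three rows fit in the fastest tier under these toy capacities.
log = [42, 42, 42, 7, 7, 99]
print(place_embeddings(log))  # {42: 'URAM', 7: 'URAM', 99: 'URAM'}
```

In the paper's setting the placement decision is also driven by a performance model of each memory's access latency, and a learned index makes it cheap to test at inference time whether a requested row resides in a fast tier; the sketch above only conveys the frequency-driven tiering intuition.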