Compressing Embedding Table via Multi-dimensional Quantization Encoding for Sequential Recommender Model

Feng Wang, Miaomiao Dai, Xudong Li, Liquan Pan
DOI: 10.1145/3507971.3508010
Published in: Proceedings of the 7th International Conference on Communication and Information Processing
Publication date: 2021-12-16
Citations: 1

Abstract

Sequential recommender models have become a research hotspot in recommender systems due to their excellent ability to capture users' dynamic preferences, and deep-learning-based sequential recommenders have achieved state-of-the-art results. However, as the numbers of users and items grow, the traditional item embedding table can consume a huge amount of memory, making the model difficult to deploy on resource-limited devices. In this paper, we propose a multi-dimensional quantization encoding (MDQE) method to address this issue. MDQE consists of two compression techniques. We first divide items into several groups according to their interaction frequency and assign a different embedding dimension to each group, constructing multi-dimensional group-wise embedding tables. Then, we use mapping matrices to transform these group-wise embedding tables into quantized codebooks for further compression. Experiments on three real-world datasets demonstrate that MDQE achieves up to a 13.86× compression ratio with negligible accuracy loss during inference.
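The first of the two techniques — grouping items by interaction frequency and giving each group its own embedding dimension, with a mapping matrix projecting back to a shared dimension — can be sketched as follows. This is a minimal illustration only: the number of groups, the per-group dimensions, the random initialization, and the equal-size split are assumptions for the sketch, not the paper's actual configuration, and the quantized-codebook stage is omitted.

```python
# Hypothetical sketch of frequency-grouped, multi-dimensional embeddings.
# Group count, dimensions, and the split scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_items, full_dim = 10_000, 64
freq = rng.zipf(1.5, size=n_items)             # synthetic interaction counts

# 1) Partition items into groups by interaction frequency.
order = np.argsort(-freq)                      # most frequent items first
groups = np.array_split(order, 3)              # e.g. head / torso / tail
group_dims = [64, 32, 8]                       # higher frequency -> larger dim

# 2) One small embedding table per group, plus a mapping matrix that
#    projects that group's embeddings back to the shared full dimension.
tables = [rng.normal(size=(len(g), d)) for g, d in zip(groups, group_dims)]
mappers = [rng.normal(size=(d, full_dim)) for d in group_dims]

def lookup(item_id: int) -> np.ndarray:
    """Return the full-dimension embedding for one item."""
    for g, table, M in zip(groups, tables, mappers):
        pos = np.where(g == item_id)[0]
        if pos.size:
            return table[pos[0]] @ M           # project up to full_dim
    raise KeyError(item_id)

baseline = n_items * full_dim                  # parameters in a plain table
compressed = sum(t.size for t in tables) + sum(m.size for m in mappers)
print(f"compression ratio: {baseline / compressed:.2f}x")
assert lookup(0).shape == (full_dim,)
```

Because most items in a Zipf-like catalogue sit in the low-frequency tail, shrinking the tail group's dimension removes the bulk of the parameters while the frequently-seen head items keep full capacity; the paper's reported ratios come from combining this with the codebook quantization step not shown here.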