One-Class Matrix Completion with Low-Density Factorizations

Vikas Sindhwani, S. Bucak, Jianying Hu, A. Mojsilovic
{"title":"One-Class Matrix Completion with Low-Density Factorizations","authors":"Vikas Sindhwani, S. Bucak, Jianying Hu, A. Mojsilovic","doi":"10.1109/ICDM.2010.164","DOIUrl":null,"url":null,"abstract":"Consider a typical recommendation problem. A company has historical records of products sold to a large customer base. These records may be compactly represented as a sparse customer-times-product ``who-bought-what\" binary matrix. Given this matrix, the goal is to build a model that provides recommendations for which products should be sold next to the existing customer base. Such problems may naturally be formulated as collaborative filtering tasks. However, this is a {\\it one-class} setting, that is, the only known entries in the matrix are one-valued. If a customer has not bought a product yet, it does not imply that the customer has a low propensity to {\\it potentially} be interested in that product. In the absence of entries explicitly labeled as negative examples, one may resort to considering unobserved customer-product pairs as either missing data or as surrogate negative instances. In this paper, we propose an approach to explicitly deal with this kind of ambiguity by instead treating the unobserved entries as optimization variables. These variables are optimized in conjunction with learning a weighted, low-rank non-negative matrix factorization (NMF) of the customer-product matrix, similar to how Transductive SVMs implement the low-density separation principle for semi-supervised learning. Experimental results show that our approach gives significantly better recommendations in comparison to various competing alternatives on one-class collaborative filtering tasks.","PeriodicalId":294061,"journal":{"name":"2010 IEEE International Conference on Data Mining","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"94","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE International Conference on Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM.2010.164","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 94

Abstract

Consider a typical recommendation problem. A company has historical records of products sold to a large customer base. These records may be compactly represented as a sparse customer-times-product "who-bought-what" binary matrix. Given this matrix, the goal is to build a model that provides recommendations for which products should be sold next to the existing customer base. Such problems may naturally be formulated as collaborative filtering tasks. However, this is a one-class setting, that is, the only known entries in the matrix are one-valued. If a customer has not bought a product yet, it does not imply that the customer has a low propensity to potentially be interested in that product. In the absence of entries explicitly labeled as negative examples, one may resort to considering unobserved customer-product pairs as either missing data or as surrogate negative instances. In this paper, we propose an approach to explicitly deal with this kind of ambiguity by instead treating the unobserved entries as optimization variables. These variables are optimized in conjunction with learning a weighted, low-rank non-negative matrix factorization (NMF) of the customer-product matrix, similar to how Transductive SVMs implement the low-density separation principle for semi-supervised learning. Experimental results show that our approach gives significantly better recommendations in comparison to various competing alternatives on one-class collaborative filtering tasks.
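As a rough illustration of the idea described above, the sketch below alternates between standard multiplicative NMF updates and re-imputing the unobserved entries of the binary matrix from the current low-rank reconstruction. The small push of imputed values away from the ambiguous 0.5 region is a crude stand-in for the paper's low-density separation principle; the function name, the update scheme, and the push term are assumptions for illustration only, not the authors' actual objective or algorithm.

```python
import numpy as np

def one_class_wnmf(X_obs, k=10, n_outer=20, n_inner=50, step=0.1, seed=0):
    """Illustrative sketch (not the paper's exact method): weighted NMF on a
    binary 'who-bought-what' matrix where unobserved entries are treated as
    variables and re-estimated from the current reconstruction."""
    rng = np.random.default_rng(seed)
    m, n = X_obs.shape
    mask = (X_obs > 0).astype(float)        # 1 = observed purchase, 0 = unobserved
    W = rng.random((m, k)) + 1e-3           # nonnegative customer factors
    H = rng.random((k, n)) + 1e-3           # nonnegative product factors
    X = X_obs.astype(float).copy()          # unobserved entries start at 0

    for _ in range(n_outer):
        # Inner loop: standard multiplicative NMF updates on the current X.
        for _ in range(n_inner):
            W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
            H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        # Re-impute unobserved entries from the reconstruction, clipped to [0, 1].
        R = np.clip(W @ H, 0.0, 1.0)
        # Crude stand-in for a low-density push: nudge imputed values toward
        # the nearest of {0, 1}, away from the ambiguous middle region.
        R = np.clip(R + step * np.sign(R - 0.5) * R * (1.0 - R), 0.0, 1.0)
        X = mask * X_obs + (1.0 - mask) * R  # observed ones stay fixed at 1
    return W, H, X

# Toy usage: 6 customers x 5 products, ones mark observed purchases.
X_obs = np.zeros((6, 5))
X_obs[[0, 1, 2, 3, 4, 5], [0, 1, 2, 0, 3, 4]] = 1.0
W, H, X_filled = one_class_wnmf(X_obs, k=2)
scores = W @ H  # rank unobserved (customer, product) pairs by reconstruction score
```

In this hypothetical scheme, the outer loop plays the role that label re-assignment plays in Transductive SVMs: the imputed values for unobserved pairs are adjusted jointly with the factorization rather than being fixed up front as zeros (surrogate negatives) or ignored as missing data.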