Multi-view Hypergraph-based Contrastive Learning Model for Cold-Start Micro-video Recommendation

Sisuo Lyu, Xiuze Zhou, Xuming Hu
{"title":"基于多视角超图的对比学习模型用于冷启动微视频推荐","authors":"Sisuo Lyu, Xiuze Zhou, Xuming Hu","doi":"arxiv-2409.09638","DOIUrl":null,"url":null,"abstract":"With the widespread use of mobile devices and the rapid growth of micro-video\nplatforms such as TikTok and Kwai, the demand for personalized micro-video\nrecommendation systems has significantly increased. Micro-videos typically\ncontain diverse information, such as textual metadata, visual cues (e.g., cover\nimages), and dynamic video content, significantly affecting user interaction\nand engagement patterns. However, most existing approaches often suffer from\nthe problem of over-smoothing, which limits their ability to capture\ncomprehensive interaction information effectively. Additionally, cold-start\nscenarios present ongoing challenges due to sparse interaction data and the\nunderutilization of available interaction signals. To address these issues, we propose a Multi-view Hypergraph-based Contrastive\nlearning model for cold-start micro-video Recommendation (MHCR). MHCR\nintroduces a multi-view multimodal feature extraction layer to capture\ninteraction signals from various perspectives and incorporates multi-view\nself-supervised learning tasks to provide additional supervisory signals.\nThrough extensive experiments on two real-world datasets, we show that MHCR\nsignificantly outperforms existing video recommendation models and effectively\nmitigates cold-start challenges. Our code is available at\nhttps://anonymous.4open.science/r/MHCR-02EF.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"14 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-view Hypergraph-based Contrastive Learning Model for Cold-Start Micro-video Recommendation\",\"authors\":\"Sisuo Lyu, Xiuze Zhou, Xuming Hu\",\"doi\":\"arxiv-2409.09638\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the widespread use of mobile devices and the rapid growth of micro-video\\nplatforms such as TikTok and Kwai, the demand for personalized micro-video\\nrecommendation systems has significantly increased. Micro-videos typically\\ncontain diverse information, such as textual metadata, visual cues (e.g., cover\\nimages), and dynamic video content, significantly affecting user interaction\\nand engagement patterns. However, most existing approaches often suffer from\\nthe problem of over-smoothing, which limits their ability to capture\\ncomprehensive interaction information effectively. Additionally, cold-start\\nscenarios present ongoing challenges due to sparse interaction data and the\\nunderutilization of available interaction signals. To address these issues, we propose a Multi-view Hypergraph-based Contrastive\\nlearning model for cold-start micro-video Recommendation (MHCR). MHCR\\nintroduces a multi-view multimodal feature extraction layer to capture\\ninteraction signals from various perspectives and incorporates multi-view\\nself-supervised learning tasks to provide additional supervisory signals.\\nThrough extensive experiments on two real-world datasets, we show that MHCR\\nsignificantly outperforms existing video recommendation models and effectively\\nmitigates cold-start challenges. 
Our code is available at\\nhttps://anonymous.4open.science/r/MHCR-02EF.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"14 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09638\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09638","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the widespread use of mobile devices and the rapid growth of micro-video platforms such as TikTok and Kwai, the demand for personalized micro-video recommendation systems has significantly increased. Micro-videos typically contain diverse information, such as textual metadata, visual cues (e.g., cover images), and dynamic video content, significantly affecting user interaction and engagement patterns. However, most existing approaches often suffer from the problem of over-smoothing, which limits their ability to capture comprehensive interaction information effectively. Additionally, cold-start scenarios present ongoing challenges due to sparse interaction data and the underutilization of available interaction signals. To address these issues, we propose a Multi-view Hypergraph-based Contrastive learning model for cold-start micro-video Recommendation (MHCR). MHCR introduces a multi-view multimodal feature extraction layer to capture interaction signals from various perspectives and incorporates multi-view self-supervised learning tasks to provide additional supervisory signals. Through extensive experiments on two real-world datasets, we show that MHCR significantly outperforms existing video recommendation models and effectively mitigates cold-start challenges. Our code is available at https://anonymous.4open.science/r/MHCR-02EF.
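The abstract describes multi-view self-supervised contrastive tasks as an extra source of supervision alongside the recommendation objective. As a rough illustration of that general idea only, and not the authors' released implementation, the sketch below shows a symmetric InfoNCE-style loss that aligns the same item's embeddings from two hypothetical views, for example a hypergraph view and a multimodal-feature view; all names, the pairing of views, and the temperature value are illustrative assumptions.

import torch
import torch.nn.functional as F


def multiview_infonce(view_a: torch.Tensor,
                      view_b: torch.Tensor,
                      temperature: float = 0.2) -> torch.Tensor:
    """Contrast an item's embeddings across two views against other items in the batch.

    view_a, view_b: (batch_size, dim) embeddings of the same items produced by two
    different views (assumed pairing, e.g., hypergraph view vs. multimodal view).
    """
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                    # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)   # positives lie on the diagonal
    # Symmetric cross-entropy: each item should match itself across the two views.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


if __name__ == "__main__":
    item_hyper = torch.randn(8, 64)   # toy hypergraph-view embeddings
    item_modal = torch.randn(8, 64)   # toy multimodal-view embeddings
    print(multiview_infonce(item_hyper, item_modal).item())

In a pipeline like the one the abstract outlines, such a contrastive term would typically be added to the main recommendation loss with a weighting coefficient; the exact views, pairing, and weighting used by MHCR are defined in the paper and the linked code repository.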