One multimodal plugin enhancing all: CLIP-based pre-training framework enhancing multimodal item representations in recommendation systems
Authors: Minghao Mo, Weihai Lu, Qixiao Xie, Zikai Xiao, Xiang Lv, Hong Yang, Yanchun Zhang
Neurocomputing, Volume 637, Article 130059. Published 2025-03-26. DOI: 10.1016/j.neucom.2025.130059
Citations: 0
Abstract
With advances in multimodal pre-training, growing effort has gone into integrating it into recommendation models. Current methods mainly use multimodal pre-training models to obtain multimodal item representations and design task-specific architectures for downstream recommendation. However, these methods often neglect whether such representations are actually suited to recommendation systems: because the pre-training is not conducted on recommendation datasets, the directly obtained representations can be suboptimal due to semantic bias from domain discrepancy and noise interference. Furthermore, collaborative information, a key element of recommendation systems, strongly affects model effectiveness, yet advanced multimodal pre-training models (e.g., CLIP) cannot capture the collaborative information of items. To bridge the gap between multimodal pre-training models and recommendation systems, we propose CPMM, a novel CLIP-based Pre-training MultiModal item representation framework for recommendation. First, the image, text, and ID representations are mapped into a new low-dimensional contrastive representation space for alignment and semantic enhancement, ensuring the consistency and robustness of the multimodal contrastive representation (MCR). Second, a contrastive learning approach regulates the inter-modal distances, mitigating the impact of noise on recommendation performance. Finally, the first-order similarities between items are modeled, integrating the items' collaborative information into the multimodal contrastive representations. Extensive experiments on Amazon benchmark datasets (Beauty, Toys, Tools) validate CPMM's effectiveness across three core recommendation tasks: sequential recommendation, collaborative filtering, and click-through rate prediction.
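The abstract describes projecting image, text, and ID representations into a shared low-dimensional contrastive space and aligning them with a contrastive objective. The sketch below is only an illustration of that general idea, not the authors' implementation: the module names, dimensions, temperature value, and InfoNCE-style loss are assumptions, and frozen CLIP embeddings are stood in for by random tensors so the snippet runs on its own.

```python
# Minimal sketch (assumptions, not the paper's code): project precomputed CLIP
# image/text embeddings and learned ID embeddings into a shared low-dimensional
# space, then align the modalities of the same item with a symmetric
# InfoNCE-style contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalContrastiveEncoder(nn.Module):
    def __init__(self, num_items: int, clip_dim: int = 512, mcr_dim: int = 64):
        super().__init__()
        # Projection heads map each modality into the shared contrastive space.
        self.img_proj = nn.Linear(clip_dim, mcr_dim)
        self.txt_proj = nn.Linear(clip_dim, mcr_dim)
        self.id_emb = nn.Embedding(num_items, mcr_dim)
        self.temperature = 0.07  # illustrative value

    def forward(self, img_feat, txt_feat, item_ids):
        # L2-normalise so dot products act as cosine similarities.
        z_img = F.normalize(self.img_proj(img_feat), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feat), dim=-1)
        z_id = F.normalize(self.id_emb(item_ids), dim=-1)
        return z_img, z_txt, z_id

    def contrastive_loss(self, z_a, z_b):
        # Symmetric InfoNCE: the matching item in the batch is the positive,
        # every other item in the batch is a negative.
        logits = z_a @ z_b.t() / self.temperature
        targets = torch.arange(z_a.size(0), device=z_a.device)
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    batch, num_items = 32, 10_000
    model = MultimodalContrastiveEncoder(num_items)
    # Stand-ins for frozen CLIP outputs; in practice these would be precomputed.
    img_feat = torch.randn(batch, 512)
    txt_feat = torch.randn(batch, 512)
    item_ids = torch.randint(0, num_items, (batch,))

    z_img, z_txt, z_id = model(img_feat, txt_feat, item_ids)
    # Align image-text, image-ID, and text-ID pairs of the same item.
    loss = (model.contrastive_loss(z_img, z_txt)
            + model.contrastive_loss(z_img, z_id)
            + model.contrastive_loss(z_txt, z_id))
    print(f"contrastive alignment loss: {loss.item():.4f}")
```

The paper additionally models first-order item similarities to inject collaborative information; that step is omitted here since the abstract does not specify its exact form.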
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.