{"title":"Towards user-specific multimodal recommendation via cross-modal attention-enhanced graph convolution network","authors":"Ruidong Wang, Chao Li, Zhongying Zhao","doi":"10.1007/s10489-024-06061-1","DOIUrl":null,"url":null,"abstract":"<div><p>Multimodal Recommendation (MR) exploits multimodal features of items (e.g., visual or textual features) to provide personalized recommendations for users. Recently, scholars have integrated Graph Convolutional Networks (GCN) into MR to model complicated multimodal relationships, but still with two significant challenges: (1) Most MR methods fail to consider the correlations between different modalities, which significantly affects the modal alignment, resulting in poor performance on MR tasks. (2) Most MR methods leverage multimodal features to enhance item representation learning. However, the connection between multimodal features and user representations remains largely unexplored. To this end, we propose a novel yet effective Cross-modal Attention-enhanced graph convolution network for user-specific Multimodal Recommendation, named CAMR. Specifically, we design a cross-modal attention mechanism to mine the cross-modal correlations. In addition, we devise a modality-aware user feature learning method that uses rich item information to learn user feature representations. Experimental results on four real-world datasets demonstrate the superiority of CAMR compared with several state-of-the-art methods. The codes of this work are available at https://github.com/ZZY-GraphMiningLab/CAMR</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 1","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-024-06061-1","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Multimodal Recommendation (MR) exploits multimodal features of items (e.g., visual or textual features) to provide personalized recommendations for users. Recently, scholars have integrated Graph Convolutional Networks (GCN) into MR to model complicated multimodal relationships, but two significant challenges remain: (1) Most MR methods fail to consider the correlations between different modalities, which significantly hinders modal alignment and results in poor performance on MR tasks. (2) Most MR methods leverage multimodal features to enhance item representation learning; however, the connection between multimodal features and user representations remains largely unexplored. To this end, we propose a novel yet effective Cross-modal Attention-enhanced graph convolution network for user-specific Multimodal Recommendation, named CAMR. Specifically, we design a cross-modal attention mechanism to mine the cross-modal correlations. In addition, we devise a modality-aware user feature learning method that uses rich item information to learn user feature representations. Experimental results on four real-world datasets demonstrate the superiority of CAMR compared with several state-of-the-art methods. The code for this work is available at https://github.com/ZZY-GraphMiningLab/CAMR.
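To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of (a) a cross-modal attention step that refines each modality's item features using the other modality, and (b) a modality-aware user representation built from the multimodal features of the items a user has interacted with. This is not the authors' implementation (see the repository linked above); all module, function, and parameter names here are hypothetical illustrations of the general technique.

```python
# Illustrative sketch only -- not the CAMR codebase. Names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalAttention(nn.Module):
    """Attend from one modality to another to mine cross-modal correlations.

    Features of one modality act as queries over the other modality's
    features, so each item embedding is refined by cross-modal context.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x_query: torch.Tensor, x_context: torch.Tensor) -> torch.Tensor:
        # x_query, x_context: (num_items, dim) item features in two modalities
        q, k, v = self.q(x_query), self.k(x_context), self.v(x_context)
        attn = F.softmax(q @ k.t() / q.size(-1) ** 0.5, dim=-1)  # (items, items)
        return x_query + attn @ v  # residual keeps the original modality signal


def modality_aware_user_features(
    interactions: torch.Tensor,  # (num_users, num_items) binary interaction matrix
    item_feats: torch.Tensor,    # (num_items, dim) aligned multimodal item features
) -> torch.Tensor:
    """Build user representations from the multimodal features of interacted
    items (here, a simple mean over each user's interaction history)."""
    counts = interactions.sum(dim=1, keepdim=True).clamp(min=1.0)
    return (interactions @ item_feats) / counts


if __name__ == "__main__":
    num_users, num_items, dim = 8, 16, 32
    visual = torch.randn(num_items, dim)
    textual = torch.randn(num_items, dim)
    inter = (torch.rand(num_users, num_items) > 0.7).float()

    attn = CrossModalAttention(dim)
    visual_aligned = attn(visual, textual)   # text-enhanced visual features
    textual_aligned = attn(textual, visual)  # vision-enhanced textual features
    item_feats = (visual_aligned + textual_aligned) / 2
    users = modality_aware_user_features(inter, item_feats)
    print(users.shape)  # torch.Size([8, 32])
```

The mean-pooling step stands in for whatever learnable aggregation the paper actually uses; the point is that user embeddings are derived from item-side multimodal content rather than from interaction IDs alone.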
About the journal:
With a focus on research in artificial intelligence and neural networks, this journal addresses issues involved in solving real-life manufacturing, defense, management, government, and industrial problems that are too complex for conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments addressing complex, real-world problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.