{"title":"Collaborative filtering based on GNN with attribute fusion and broad attention.","authors":"MingXue Liu, Min Wang, Baolei Li, Qi Zhong","doi":"10.7717/peerj-cs.2706","DOIUrl":null,"url":null,"abstract":"<p><p>Recommender systems based on collaborative filtering (CF) have been a prominent area of research. In recent years, graph neural networks (GNN) based CF models have effectively addressed the limitations of nonlinearity and higher-order feature interactions in traditional recommendation methods, such as matrix decomposition-based methods and factorization machine approaches, achieving excellent recommendation performance. However, existing GNN-based CF models still have two problems that affect performance improvement. First, although distinguishing between inner interaction and cross interaction, these models still aggregate all attributes indiscriminately. Second, the models do not exploit higher-order interaction information. To address the problems above, this article proposes a collaborative filtering method based on GNN with attribute fusion and broad attention, named GNN-A<sup>2</sup>, which incorporates an inner interaction module with self-attention, a cross interaction module with attribute fusion, and a broad attentive cross module. In summary, GNN-A<sup>2</sup> model performs inner interactions and cross interactions in different ways, then extracts their higher-order interaction information for prediction. We conduct extensive experiments on three benchmark datasets, <i>i.e</i>., MovieLens 1M, Book-crossing, and Taobao. The experimental results demonstrate that our proposed GNN-A<sup>2</sup> model achieves comparable performance on area under the curve (AUC) metric. 
Notably, GNN-A<sup>2</sup> achieves the optimal performance on Normalized Discounted Cumulative Gain at rank 10 (NDCG@10) over three datasets, with values of 0.9506, 0.9137, and 0.1526, corresponding to respective improvements of 0.68%, 1.57%, and 2.14% compared to the state-of-the-art (SOTA) models. The source code and evaluation datasets are available at: https://github.com/LMXue7/GNN-A2.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"11 ","pages":"e2706"},"PeriodicalIF":3.5000,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888940/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PeerJ Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.7717/peerj-cs.2706","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Recommender systems based on collaborative filtering (CF) have been a prominent area of research. In recent years, graph neural network (GNN)-based CF models have effectively addressed the limitations of traditional recommendation methods, such as matrix decomposition and factorization machine approaches, in capturing nonlinearity and higher-order feature interactions, achieving excellent recommendation performance. However, existing GNN-based CF models still have two problems that limit further performance improvement. First, although they distinguish between inner interactions and cross interactions, these models still aggregate all attributes indiscriminately. Second, they do not exploit higher-order interaction information. To address these problems, this article proposes a collaborative filtering method based on a GNN with attribute fusion and broad attention, named GNN-A2, which incorporates an inner interaction module with self-attention, a cross interaction module with attribute fusion, and a broad attentive cross module. In summary, the GNN-A2 model performs inner interactions and cross interactions in different ways, then extracts their higher-order interaction information for prediction. We conduct extensive experiments on three benchmark datasets: MovieLens 1M, Book-Crossing, and Taobao. The experimental results demonstrate that our proposed GNN-A2 model achieves comparable performance on the area under the curve (AUC) metric. Notably, GNN-A2 achieves the best performance on Normalized Discounted Cumulative Gain at rank 10 (NDCG@10) across all three datasets, with values of 0.9506, 0.9137, and 0.1526, corresponding to improvements of 0.68%, 1.57%, and 2.14% over the state-of-the-art (SOTA) models. The source code and evaluation datasets are available at: https://github.com/LMXue7/GNN-A2.
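The abstract reports results on the NDCG@10 metric. For readers unfamiliar with it, below is a minimal, standard sketch of how NDCG@k is typically computed; this is an illustrative implementation of the metric itself, not the authors' evaluation code, and the example relevance list is hypothetical.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k positions of a ranked list."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """NDCG@k: DCG of the predicted ranking divided by the ideal (sorted) DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical example: binary relevance labels for 10 recommended items,
# ordered by the model's predicted score.
ranked = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
score = ndcg_at_k(ranked, k=10)
```

A perfect ranking (all relevant items first) yields NDCG@k = 1.0, so the reported values of 0.9506 and 0.9137 indicate rankings close to ideal, while 0.1526 on Taobao reflects a much sparser, harder task.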
Journal overview:
PeerJ Computer Science is an open access journal covering all subject areas in computer science, backed by a prestigious advisory board and more than 300 academic editors.