{"title":"Synergistic fusion framework: Integrating training and non-training processes for accelerated graph convolution network-based recommendation","authors":"Fan Mo , Xin Fan , Chongxian Chen , Hayato Yamana","doi":"10.1016/j.patcog.2025.111829","DOIUrl":null,"url":null,"abstract":"<div><div>The training and inference (generating recommendation lists) of Graph convolution networks (GCN)-based recommendation models are time-consuming. Existing techniques aim to improve the training speed by proposing new GCN variants. However, the development of GCN leads to multiple technological branches using graph-enhancement techniques, including subgraph and edge sampling techniques. Simply proposing a GCN variant for training acceleration is inadequate, lacking a generalized training acceleration framework for multiple GCN models. Another weakness of previous studies is neglecting the importance of inference speed. This study introduces a candidate-based fusion framework to accelerate the training and inference of GCN models. The idea for training acceleration is to achieve layer compression by aggregating information directly from candidate items generated in a non-training process. Besides, we achieve inference acceleration by ranking items only in the candidate sets. The proposed framework is generalized across six state-of-the-art GCN models. Experimental results confirm the effectiveness of the method.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"167 ","pages":"Article 111829"},"PeriodicalIF":7.5000,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325004893","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
The training and inference (generating recommendation lists) of graph convolution network (GCN)-based recommendation models are time-consuming. Existing techniques aim to improve training speed by proposing new GCN variants. However, the development of GCNs has produced multiple technological branches that use graph-enhancement techniques, including subgraph and edge sampling. Simply proposing a GCN variant for training acceleration is therefore inadequate; what is lacking is a generalized training acceleration framework applicable to multiple GCN models. Another weakness of previous studies is that they neglect the importance of inference speed. This study introduces a candidate-based fusion framework to accelerate both the training and inference of GCN models. The idea for training acceleration is to achieve layer compression by aggregating information directly from candidate items generated in a non-training process. In addition, we achieve inference acceleration by ranking only the items in the candidate sets. The proposed framework generalizes across six state-of-the-art GCN models. Experimental results confirm the effectiveness of the method.
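To make the inference-acceleration idea concrete, the following is a minimal sketch (not the paper's implementation) of candidate-restricted ranking: instead of scoring the full item catalog, each user's recommendation list is produced by scoring and ranking only the items in that user's precomputed candidate set. All names here (user_emb, item_emb, candidate_sets, top_k) are hypothetical placeholders assumed for illustration.

```python
import numpy as np

def recommend_from_candidates(user_emb, item_emb, candidate_sets, top_k=10):
    """Rank only the candidate items for each user and return the top-k ids.

    user_emb:        array of shape (num_users, d), one embedding per user
    item_emb:        array of shape (num_items, d), one embedding per item
    candidate_sets:  dict mapping user_id -> list of candidate item ids
    """
    recommendations = {}
    for user_id, candidates in candidate_sets.items():
        cand = np.asarray(candidates)
        # Score only the candidate items (dot product with the user embedding),
        # avoiding a full pass over the entire item catalog.
        scores = item_emb[cand] @ user_emb[user_id]
        # Rank candidates by descending score and keep the top-k.
        order = np.argsort(-scores)[:top_k]
        recommendations[user_id] = cand[order].tolist()
    return recommendations
```

The cost of ranking per user then scales with the candidate-set size rather than with the total number of items, which is where the inference speed-up described in the abstract would come from.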
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.