A BERT-Based Multi-Embedding Fusion Method Using Review Text for Recommendation
Haebin Lim, Qinglong Li, Sigeon Yang, Jaekyeong Kim
Expert Systems, 42(5), published 27 March 2025. DOI: 10.1111/exsy.70041 (https://onlinelibrary.wiley.com/doi/10.1111/exsy.70041)
Abstract
Collaborative filtering is a widely used method in recommender systems research. However, contrary to the assumption that it relies solely on rating data, many contemporary models incorporate review information to address issues such as data sparsity. Although previous recommender systems utilised review texts to capture user preferences and item features, they have often relied on a single-embedding model to represent these features, which may limit the richness of the extracted information. Recent advancements suggest that combining multiple pre-trained embedding models can enhance text representation by leveraging the strengths of different encoding methods. In this study, we propose a novel recommender system model, the Multi-embedding Fusion Network for Recommendation (MFNR), which employs a multi-embedding approach to effectively capture and represent user and item features in review texts. Specifically, the proposed model integrates Bidirectional Encoder Representations from Transformers (BERT) and its optimised variant, RoBERTa, both of which are pre-trained transformer-based models designed for natural language understanding. By leveraging their contextual embeddings, our model extracts enriched feature representations from review texts. Extensive experiments conducted on real-world review datasets from Amazon.com and Goodreads.com demonstrate that MFNR significantly outperforms existing baseline models, achieving an average improvement of 9.18% in RMSE and 14.81% in MAE. These results highlight the efficacy of the multi-embedding approach, indicating its potential for broader application in complex recommendation scenarios.
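To make the multi-embedding idea concrete, the sketch below shows one way review text could be encoded with both BERT and RoBERTa and the two contextual embeddings fused for rating prediction, using the Hugging Face transformers library. The specific checkpoints ("bert-base-uncased", "roberta-base"), the use of the first-token embedding, concatenation as the fusion operator, and the MLP rating head are illustrative assumptions; the abstract does not specify the exact MFNR architecture.

```python
# Minimal illustrative sketch of multi-embedding fusion for review-based rating
# prediction. Assumptions (not taken from the MFNR paper): base checkpoints,
# first-token sentence embeddings, concatenation fusion, and an MLP head.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class MultiEmbeddingFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.roberta_tok = AutoTokenizer.from_pretrained("roberta-base")
        self.roberta = AutoModel.from_pretrained("roberta-base")
        hidden = self.bert.config.hidden_size + self.roberta.config.hidden_size
        # Fusion head: maps the concatenated embeddings to a single rating score.
        self.head = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, 1))

    def encode(self, encoder, tokenizer, texts):
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        # Use the first-token representation as a sentence-level embedding.
        return encoder(**batch).last_hidden_state[:, 0]

    def forward(self, review_texts):
        bert_emb = self.encode(self.bert, self.bert_tok, review_texts)
        roberta_emb = self.encode(self.roberta, self.roberta_tok, review_texts)
        fused = torch.cat([bert_emb, roberta_emb], dim=-1)  # simple concatenation fusion
        return self.head(fused).squeeze(-1)  # predicted rating per review


if __name__ == "__main__":
    model = MultiEmbeddingFusion()
    with torch.no_grad():
        print(model(["Great book, the plot kept me hooked until the end."]))
```

Concatenation is only one possible fusion choice here; it keeps both encoders' representations intact and leaves it to the downstream head to weight them, whereas alternatives such as attention-based or gated fusion would learn that weighting explicitly.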
Journal Introduction
Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper.
As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.