Comparative analysis of methodologies and approaches in recommender systems utilizing large language models
Salma S. Elmoghazy, Marwa A. Shouman, Hamdy K. Elminir, Gamal Eldin I. Selim
Artificial Intelligence Review, 58(7), published 2025-04-17. DOI: 10.1007/s10462-025-11189-8
https://link.springer.com/article/10.1007/s10462-025-11189-8
Abstract
Recommendation systems are indispensable technologies today, as they enable analysis of the vast amount of information available on the internet and help consumers make decisions effectively. Ongoing effort is essential to further develop these systems and align them with the evolving demands of the modern era. In the last few years, large language models (LLMs) have made a huge leap in natural language processing. This advancement has directed researchers’ efforts towards employing these models in various fields, including recommender systems, to leverage the vast amount of data they were trained on. This paper presents a comparative study of a set of recent methodologies that adapt LLMs to recommendation. Across the reviewed research, we find that LLMs offer significant benefits thanks to the knowledge they possess and their powerful ability to represent textual data, making them useful for common recommendation issues such as cold start. In addition, the variety of fine-tuning and in-context learning techniques enables the adaptation of LLMs to a wide range of recommendation tasks. We discuss the issues addressed in the reviewed work and the solutions proposed to enhance recommendation systems. To provide a clearer understanding, we propose taxonomies that categorize the reviewed work by underlying technique, covering the role of LLMs in recommendation, learning paradigms, and system structures. We also survey the datasets and the recommendation- and language-related metrics commonly used in this domain. Finally, we analyze findings in related work, highlighting the strengths and limitations of using LLMs in recommender systems.
About the journal
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.