CD-LLMCARS: Cross Domain Fine-Tuned Large Language Model for Context-Aware Recommender Systems
Adeel Ashraf Cheema; Muhammad Shahzad Sarfraz; Usman Habib; Qamar Uz Zaman; Ekkarat Boonchieng
IEEE Open Journal of the Computer Society, vol. 6, pp. 49-59. Published 2024-11-28.
DOI: 10.1109/OJCS.2024.3509221
https://ieeexplore.ieee.org/document/10771726/
Abstract
Recommender systems are essential for delivering personalized content across a wide range of platforms. However, traditional systems often struggle with limited information, known as the cold-start problem, and with accurately interpreting a user's full range of preferences, referred to as context. This study proposes CD-LLMCARS (Cross-Domain fine-tuned Large Language Model for Context-Aware Recommender Systems), a novel approach to addressing these issues. CD-LLMCARS leverages the capabilities of the Large Language Model (LLM) Llama 2: fine-tuning Llama 2 on information from multiple domains improves the generation of contextually relevant recommendations aligned with a user's preferences across areas such as movies, music, books, and CDs. Techniques such as Low-Rank Adaptation (LoRA) and half-precision (FP16) training are both effective and resource-efficient, allowing CD-LLMCARS to be fine-tuned economically while performing well in cold-start scenarios. Extensive evaluation of CD-LLMCARS indicates strong accuracy, particularly in challenging scenarios with limited user history, where the cold-start problem is most acute. CD-LLMCARS offers precise and pertinent recommendations to users, effectively mitigating the limitations of traditional recommender systems.
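To make the training recipe named in the abstract concrete, the sketch below shows how LoRA adapters and FP16 weights can be combined to fine-tune Llama 2 using the Hugging Face transformers and peft libraries. This is a minimal illustration under stated assumptions, not the paper's reported setup: the rank, scaling factor, target modules, and dropout values are placeholders, and the cross-domain training data preparation is omitted entirely.

```python
# Minimal sketch: LoRA + FP16 fine-tuning setup for Llama 2.
# Hyperparameters below are illustrative assumptions, not the
# configuration reported in the CD-LLMCARS paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; access required
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the base model in half precision (FP16) to cut memory use roughly in half.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
)

# LoRA freezes the base weights and trains small low-rank adapter matrices,
# so only a tiny fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # adapter scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only LoRA adapters are trainable

# After fine-tuning on cross-domain preference data (not shown), only the
# lightweight adapter weights need to be saved, not the full model.
model.save_pretrained("cd-llmcars-adapter")
```

Saving only the adapter is one reason this combination suits resource-constrained fine-tuning: the trainable parameter count and checkpoint size stay small while the frozen FP16 base model supplies the general language capability the recommendations build on.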