Chain-of-thought prompting empowered generative user modeling for personalized recommendation

Fan Yang, Yong Yue, Gangmin Li, Terry R. Payne, Ka Lok Man

Neural Computing and Applications, published 2024-09-14. DOI: https://doi.org/10.1007/s00521-024-10364-2
Citations: 0
Abstract
Personalized recommendation plays a crucial role in Internet platforms, providing users with tailored content based on their user models and enhancing user satisfaction and experience. To address the challenge of information overload, it is essential to analyze user needs comprehensively, considering not only historical behavior and interests but also the user's intentions and profiles. Previous user modeling approaches for personalized recommendation have exhibited certain limitations, relying primarily on historical behavior data to infer user preferences, which leads to challenges such as the cold-start problem, incomplete modeling, and limited explainability. Motivated by recent advancements in large language models (LLMs), we present a novel approach that embraces generative user modeling with LLMs. We propose generative user modeling with chain-of-thought prompting for personalized recommendation, which utilizes LLMs to generate comprehensive and accurate user models expressed in natural language, and then employs these user models to empower LLMs for personalized recommendation. Specifically, we adopt the chain-of-thought prompting method to reason about user attributes, subjective preferences, and intentions, integrating them into a holistic user model. Subsequently, we utilize the generated user models as input to LLMs and design a collection of prompts to align the LLMs with various recommendation tasks, encompassing rating prediction, sequential recommendation, direct recommendation, and explanation generation. Extensive experiments conducted on real-world datasets demonstrate the immense potential of large language models in generating natural language user models, and the adoption of generative user modeling significantly enhances the performance of LLMs across the four recommendation tasks. Our code and dataset can be found at https://github.com/kwyyangfan/GUMRec.
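The two-stage pipeline the abstract describes can be sketched as prompt construction: stage one is a chain-of-thought prompt that asks the LLM to reason step by step about a user's attributes, preferences, and intentions and to integrate them into a natural-language user model; stage two feeds that generated user model into task-specific prompts for the four recommendation tasks. The prompt wording and function names below are illustrative assumptions, not the authors' actual templates (those are in the linked repository):

```python
def build_user_model_prompt(history: list[str]) -> str:
    """Stage 1: chain-of-thought prompt for generative user modeling.

    Asks the LLM to reason about attributes, subjective preferences,
    and intentions before integrating them into one holistic user model.
    """
    items = "\n".join(f"- {item}" for item in history)
    return (
        "A user has interacted with the following items:\n"
        f"{items}\n"
        "Let's think step by step.\n"
        "1. Infer the user's attributes from these interactions.\n"
        "2. Infer the user's subjective preferences.\n"
        "3. Infer the user's current intentions.\n"
        "Finally, integrate these into a concise natural-language user model."
    )


def build_task_prompt(user_model: str, task: str, candidates: list[str]) -> str:
    """Stage 2: align the LLM with a recommendation task via the user model.

    One template per task named in the abstract; the generated user model
    is prepended so the LLM conditions its answer on it.
    """
    templates = {
        "rating_prediction": "Predict the user's rating (1-5) for: {c}",
        "sequential": "Given the interaction order, predict the next item from: {c}",
        "direct": "Recommend the best item for this user from: {c}",
        "explanation": "Explain why the user would like: {c}",
    }
    return f"User model: {user_model}\n" + templates[task].format(
        c=", ".join(candidates)
    )
```

In use, the string returned by `build_user_model_prompt` would be sent to an LLM, and its natural-language reply passed as `user_model` to `build_task_prompt` for each of the four tasks.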