{"title":"An actor-critic based recommender system with context-aware user modeling","authors":"Maryam Bukhari, Muazzam Maqsood, Farhan Adil","doi":"10.1007/s10462-025-11134-9","DOIUrl":null,"url":null,"abstract":"<div><p>Recommendation systems empower users with tailored service assistance by learning about their interactions with systems and recommending items based on their preferences and interests. Typical recommender systems view the recommendation process as a static procedure disregarding the fact that users’ preferences are changed over time. Reinforcement learning (RL) approaches are the most advanced and recent techniques used by researchers to handle challenges where the user’s interest is captured by their most recent interactions with the system. However, most of the recent research on RL-based recommender systems focuses on simply the user’s recent interactions to generate the recommendations without taking into account the context of the user in which these interactions occur. The context has a great impact on users’ interests, behaviors, and ratings e.g., user mood, time, day type, companion, social circle, and location. In this paper, we propose a context-aware deep reinforcement learning-based recommender system focusing on context-specific state modeling methods. In this approach, states are designed based on the user’s most recent context. In parallel, a list-wise version of the context-aware recommender agent is also proposed, in which a list of items is recommended to users at each step of interaction based on their context. The findings of the study indicate that modeling users’ preferences in combination with contextual variables improves the performance of RL-based recommender systems. Furthermore, we evaluate the proposed method on context-based datasets in an offline environment. The performance in terms of evaluation measures optimally indicates the worth of the proposed method in comparison with existing studies. More precisely, the highest Presicion@5, MAP@10, and NDCG@10 of the context-aware recommender agent are 77%, 76%, and 74% respectively.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 5","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11134-9.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11134-9","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Recommendation systems empower users with tailored service assistance by learning from their interactions with a system and recommending items based on their preferences and interests. Typical recommender systems treat recommendation as a static procedure, disregarding the fact that users’ preferences change over time. Reinforcement learning (RL) approaches are among the most recent and advanced techniques for handling this challenge, capturing a user’s interest from their most recent interactions with the system. However, most recent research on RL-based recommender systems relies solely on the user’s recent interactions to generate recommendations, without taking into account the context in which these interactions occur. Context, e.g., user mood, time, day type, companion, social circle, and location, has a strong influence on users’ interests, behaviors, and ratings. In this paper, we propose a context-aware deep reinforcement learning-based recommender system centered on context-specific state modeling, in which states are designed from the user’s most recent context. In parallel, we also propose a list-wise version of the context-aware recommender agent, which recommends a list of items to the user at each interaction step based on their context. The findings of the study indicate that modeling users’ preferences in combination with contextual variables improves the performance of RL-based recommender systems. Furthermore, we evaluate the proposed method on context-based datasets in an offline environment. The evaluation measures demonstrate the value of the proposed method in comparison with existing studies. More precisely, the highest Precision@5, MAP@10, and NDCG@10 of the context-aware recommender agent are 77%, 76%, and 74% respectively.
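To make the state-modeling idea concrete, the following is a minimal sketch, not the authors' implementation, of a context-aware actor-critic recommender. It assumes a state built by concatenating embeddings of the user's k most recent items with a context feature vector (mood, time, day type, companion, etc.), an actor that emits a "proto-item" action vector, and a list-wise step that ranks all items by similarity to that action; the network sizes, the proto-item ranking scheme, and all names (ContextAwareActorCritic, build_state, etc.) are illustrative assumptions not taken from the paper.

```python
# Hypothetical sketch of a context-aware actor-critic recommender agent.
# Assumed design: state = [embeddings of K recent items || context features],
# actor -> proto-item action vector, critic -> Q(state, action),
# list-wise recommendation = top-N items by similarity to the action vector.
import torch
import torch.nn as nn

K_RECENT, EMB_DIM, CTX_DIM, N_ITEMS, LIST_SIZE = 5, 16, 8, 100, 5

class ContextAwareActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.item_emb = nn.Embedding(N_ITEMS, EMB_DIM)
        state_dim = K_RECENT * EMB_DIM + CTX_DIM
        # Actor: maps the context-aware state to a proto-item (action) vector.
        self.actor = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, EMB_DIM)
        )
        # Critic: estimates Q(state, action) for actor-critic training.
        self.critic = nn.Sequential(
            nn.Linear(state_dim + EMB_DIM, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def build_state(self, recent_items, context):
        # recent_items: (batch, K_RECENT) item ids; context: (batch, CTX_DIM) features.
        hist = self.item_emb(recent_items).flatten(start_dim=1)
        return torch.cat([hist, context], dim=-1)

    def forward(self, recent_items, context):
        state = self.build_state(recent_items, context)
        action = self.actor(state)                      # proto-item vector
        q_value = self.critic(torch.cat([state, action], dim=-1))
        # List-wise step: score every item against the action vector and
        # recommend the top LIST_SIZE items for this interaction step.
        scores = action @ self.item_emb.weight.T        # (batch, N_ITEMS)
        rec_list = scores.topk(LIST_SIZE, dim=-1).indices
        return rec_list, q_value

# Example: one user with 5 recent items and an 8-dimensional context vector.
model = ContextAwareActorCritic()
recent = torch.randint(0, N_ITEMS, (1, K_RECENT))
ctx = torch.randn(1, CTX_DIM)
recs, q = model(recent, ctx)
print(recs, q.item())
```

In an offline setting such as the one described in the abstract, the critic's Q-estimate would be trained against rewards derived from logged ratings, while the actor is updated to produce action vectors whose top-ranked lists maximize that estimate.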
About the journal:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.