{"title":"Sequential Recommendation via Adaptive Robust Attention with Multi-dimensional Embeddings","authors":"Linsey Pang, Amir Hossein Raffiee, Wei Liu, Keld Lundgaard","doi":"arxiv-2409.05022","DOIUrl":"https://doi.org/arxiv-2409.05022","url":null,"abstract":"Sequential recommendation models have achieved state-of-the-art performance\u0000using self-attention mechanism. It has since been found that moving beyond only\u0000using item ID and positional embeddings leads to a significant accuracy boost\u0000when predicting the next item. In recent literature, it was reported that a\u0000multi-dimensional kernel embedding with temporal contextual kernels to capture\u0000users' diverse behavioral patterns results in a substantial performance\u0000improvement. In this study, we further improve the sequential recommender\u0000model's robustness and generalization by introducing a mix-attention mechanism\u0000with a layer-wise noise injection (LNI) regularization. We refer to our\u0000proposed model as adaptive robust sequential recommendation framework (ADRRec),\u0000and demonstrate through extensive experiments that our model outperforms\u0000existing self-attention architectures.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey on Diffusion Models for Recommender Systems","authors":"Jianghao Lin, Jiaqi Liu, Jiachen Zhu, Yunjia Xi, Chengkai Liu, Yangtian Zhang, Yong Yu, Weinan Zhang","doi":"arxiv-2409.05033","DOIUrl":"https://doi.org/arxiv-2409.05033","url":null,"abstract":"While traditional recommendation techniques have made significant strides in\u0000the past decades, they still suffer from limited generalization performance\u0000caused by factors like inadequate collaborative signals, weak latent\u0000representations, and noisy data. In response, diffusion models (DMs) have\u0000emerged as promising solutions for recommender systems due to their robust\u0000generative capabilities, solid theoretical foundations, and improved training\u0000stability. To this end, in this paper, we present the first comprehensive\u0000survey on diffusion models for recommendation, and draw a bird's-eye view from\u0000the perspective of the whole pipeline in real-world recommender systems. We\u0000systematically categorize existing research works into three primary domains:\u0000(1) diffusion for data engineering & encoding, focusing on data augmentation\u0000and representation enhancement; (2) diffusion as recommender models, employing\u0000diffusion models to directly estimate user preferences and rank items; and (3)\u0000diffusion for content presentation, utilizing diffusion models to generate\u0000personalized content such as fashion and advertisement creatives. Our taxonomy\u0000highlights the unique strengths of diffusion models in capturing complex data\u0000distributions and generating high-quality, diverse samples that closely align\u0000with user preferences. We also summarize the core characteristics of the\u0000adapting diffusion models for recommendation, and further identify key areas\u0000for future exploration, which helps establish a roadmap for researchers and\u0000practitioners seeking to advance recommender systems through the innovative\u0000application of diffusion models. To further facilitate the research community\u0000of recommender systems based on diffusion models, we actively maintain a GitHub\u0000repository for papers and other related resources in this rising direction\u0000https://github.com/CHIANGEL/Awesome-Diffusion-for-RecSys.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs","authors":"Jintian Zhang, Cheng Peng, Mengshu Sun, Xiang Chen, Lei Liang, Zhiqiang Zhang, Jun Zhou, Huajun Chen, Ningyu Zhang","doi":"arxiv-2409.05152","DOIUrl":"https://doi.org/arxiv-2409.05152","url":null,"abstract":"Despite the recent advancements in Large Language Models (LLMs), which have\u0000significantly enhanced the generative capabilities for various NLP tasks, LLMs\u0000still face limitations in directly handling retrieval tasks. However, many\u0000practical applications demand the seamless integration of both retrieval and\u0000generation. This paper introduces a novel and efficient One-pass Generation and\u0000retrieval framework (OneGen), designed to improve LLMs' performance on tasks\u0000that require both generation and retrieval. The proposed framework bridges the\u0000traditionally separate training approaches for generation and retrieval by\u0000incorporating retrieval tokens generated autoregressively. This enables a\u0000single LLM to handle both tasks simultaneously in a unified forward pass. We\u0000conduct experiments on two distinct types of composite tasks, RAG and Entity\u0000Linking, to validate the pluggability, effectiveness, and efficiency of OneGen\u0000in training and inference. Furthermore, our results show that integrating\u0000generation and retrieval within the same context preserves the generative\u0000capabilities of LLMs while improving retrieval performance. To the best of our\u0000knowledge, OneGen is the first to enable LLMs to conduct vector retrieval\u0000during the generation.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Late Chunking: Contextual Chunk Embeddings Using Long-Context Embedding Models","authors":"Michael Günther, Isabelle Mohr, Bo Wang, Han Xiao","doi":"arxiv-2409.04701","DOIUrl":"https://doi.org/arxiv-2409.04701","url":null,"abstract":"Many use cases require retrieving smaller portions of text, and dense\u0000vector-based retrieval systems often perform better with shorter text segments,\u0000as the semantics are less likely to be \"over-compressed\" in the embeddings.\u0000Consequently, practitioners often split text documents into smaller chunks and\u0000encode them separately. However, chunk embeddings created in this way can lose\u0000contextual information from surrounding chunks, resulting in suboptimal\u0000representations. In this paper, we introduce a novel method called \"late\u0000chunking,\" which leverages long context embedding models to first embed all\u0000tokens of the long text, with chunking applied after the transformer model and\u0000just before mean pooling. The resulting chunk embeddings capture the full\u0000contextual information, leading to superior results across various retrieval\u0000tasks without the need for additional training. Moreover, our method is generic\u0000enough to be applied to any long-context embedding model.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Debias Can be Unreliable: Mitigating Bias Issue in Evaluating Debiasing Recommendation","authors":"Chengbing Wang, Wentao Shi, Jizhi Zhang, Wenjie Wang, Hang Pan, Fuli Feng","doi":"arxiv-2409.04810","DOIUrl":"https://doi.org/arxiv-2409.04810","url":null,"abstract":"Recent work has improved recommendation models remarkably by equipping them\u0000with debiasing methods. Due to the unavailability of fully-exposed datasets,\u0000most existing approaches resort to randomly-exposed datasets as a proxy for\u0000evaluating debiased models, employing traditional evaluation scheme to\u0000represent the recommendation performance. However, in this study, we reveal\u0000that traditional evaluation scheme is not suitable for randomly-exposed\u0000datasets, leading to inconsistency between the Recall performance obtained\u0000using randomly-exposed datasets and that obtained using fully-exposed datasets.\u0000Such inconsistency indicates the potential unreliability of experiment\u0000conclusions on previous debiasing techniques and calls for unbiased Recall\u0000evaluation using randomly-exposed datasets. To bridge the gap, we propose the\u0000Unbiased Recall Evaluation (URE) scheme, which adjusts the utilization of\u0000randomly-exposed datasets to unbiasedly estimate the true Recall performance on\u0000fully-exposed datasets. We provide theoretical evidence to demonstrate the\u0000rationality of URE and perform extensive experiments on real-world datasets to\u0000validate its soundness.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"QueryBuilder: Human-in-the-Loop Query Development for Information Retrieval","authors":"Hemanth Kandula, Damianos Karakos, Haoling Qiu, Benjamin Rozonoyer, Ian Soboroff, Lee Tarlin, Bonan Min","doi":"arxiv-2409.04667","DOIUrl":"https://doi.org/arxiv-2409.04667","url":null,"abstract":"Frequently, users of an Information Retrieval (IR) system start with an\u0000overarching information need (a.k.a., an analytic task) and proceed to define\u0000finer-grained queries covering various important aspects (i.e., sub-topics) of\u0000that analytic task. We present a novel, interactive system called\u0000$textit{QueryBuilder}$, which allows a novice, English-speaking user to create\u0000queries with a small amount of effort, through efficient exploration of an\u0000English development corpus in order to rapidly develop cross-lingual\u0000information retrieval queries corresponding to the user's information needs.\u0000QueryBuilder performs near real-time retrieval of documents based on\u0000user-entered search terms; the user looks through the retrieved documents and\u0000marks sentences as relevant to the information needed. The marked sentences are\u0000used by the system as additional information in query formation and refinement:\u0000query terms (and, optionally, event features, which capture event $'triggers'$\u0000(indicator terms) and agent/patient roles) are appropriately weighted, and a\u0000neural-based system, which better captures textual meaning, retrieves other\u0000relevant content. The process of retrieval and marking is repeated as many\u0000times as desired, giving rise to increasingly refined queries in each\u0000iteration. The final product is a fine-grained query used in Cross-Lingual\u0000Information Retrieval (CLIR). Our experiments using analytic tasks and requests\u0000from the IARPA BETTER IR datasets show that with a small amount of effort (at\u0000most 10 minutes per sub-topic), novice users can form $textit{useful}$\u0000fine-grained queries including in languages they don't understand. QueryBuilder\u0000also provides beneficial capabilities to the traditional corpus exploration and\u0000query formation process. A demonstration video is released at\u0000https://vimeo.com/734795835","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incorporate LLMs with Influential Recommender System","authors":"Mingze Wang, Shuxian Bi, Wenjie Wang, Chongming Gao, Yangyang Li, Fuli Feng","doi":"arxiv-2409.04827","DOIUrl":"https://doi.org/arxiv-2409.04827","url":null,"abstract":"Recommender systems have achieved increasing accuracy over the years.\u0000However, this precision often leads users to narrow their interests, resulting\u0000in issues such as limited diversity and the creation of echo chambers. Current\u0000research addresses these challenges through proactive recommender systems by\u0000recommending a sequence of items (called influence path) to guide user interest\u0000in the target item. However, existing methods struggle to construct a coherent\u0000influence path that builds up with items the user is likely to enjoy. In this\u0000paper, we leverage the Large Language Model's (LLMs) exceptional ability for\u0000path planning and instruction following, introducing a novel approach named\u0000LLM-based Influence Path Planning (LLM-IPP). Our approach maintains coherence\u0000between consecutive recommendations and enhances user acceptability of the\u0000recommended items. To evaluate LLM-IPP, we implement various user simulators\u0000and metrics to measure user acceptability and path coherence. Experimental\u0000results demonstrate that LLM-IPP significantly outperforms traditional\u0000proactive recommender systems. This study pioneers the integration of LLMs into\u0000proactive recommender systems, offering a reliable and user-engaging\u0000methodology for future recommendation technologies.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Unified Framework for Cross-Domain Recommendation","authors":"Jiangxia Cao, Shen Wang, Gaode Chen, Rui Huang, Shuang Yang, Zhaojie Liu, Guorui Zhou","doi":"arxiv-2409.04540","DOIUrl":"https://doi.org/arxiv-2409.04540","url":null,"abstract":"In addressing the persistent challenges of data-sparsity and cold-start\u0000issues in domain-expert recommender systems, Cross-Domain Recommendation (CDR)\u0000emerges as a promising methodology. CDR aims at enhancing prediction\u0000performance in the target domain by leveraging interaction knowledge from\u0000related source domains, particularly through users or items that span across\u0000multiple domains (e.g., Short-Video and Living-Room). For academic research\u0000purposes, there are a number of distinct aspects to guide CDR method designing,\u0000including the auxiliary domain number, domain-overlapped element, user-item\u0000interaction types, and downstream tasks. With so many different CDR combination\u0000scenario settings, the proposed scenario-expert approaches are tailored to\u0000address a specific vertical CDR scenario, and often lack the capacity to adapt\u0000to multiple horizontal scenarios. In an effect to coherently adapt to various\u0000scenarios, and drawing inspiration from the concept of domain-invariant\u0000transfer learning, we extend the former SOTA model UniCDR in five different\u0000aspects, named as UniCDR+. Our work was successfully deployed on the Kuaishou\u0000Living-Room RecSys.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey on Knowledge Organization Systems of Research Fields: Resources and Challenges","authors":"Angelo Salatino, Tanay Aggarwal, Andrea Mannocci, Francesco Osborne, Enrico Motta","doi":"arxiv-2409.04432","DOIUrl":"https://doi.org/arxiv-2409.04432","url":null,"abstract":"Knowledge Organization Systems (KOSs), such as term lists, thesauri,\u0000taxonomies, and ontologies, play a fundamental role in categorising, managing,\u0000and retrieving information. In the academic domain, KOSs are often adopted for\u0000representing research areas and their relationships, primarily aiming to\u0000classify research articles, academic courses, patents, books, scientific\u0000venues, domain experts, grants, software, experiment materials, and several\u0000other relevant products and agents. These structured representations of\u0000research areas, widely embraced by many academic fields, have proven effective\u0000in empowering AI-based systems to i) enhance retrievability of relevant\u0000documents, ii) enable advanced analytic solutions to quantify the impact of\u0000academic research, and iii) analyse and forecast research dynamics. This paper\u0000aims to present a comprehensive survey of the current KOS for academic\u0000disciplines. We analysed and compared 45 KOSs according to five main\u0000dimensions: scope, structure, curation, usage, and links to other KOSs. Our\u0000results reveal a very heterogeneous scenario in terms of scope, scale, quality,\u0000and usage, highlighting the need for more integrated solutions for representing\u0000research knowledge across academic fields. We conclude by discussing the main\u0000challenges and the most promising future directions.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Sequential Music Recommendation with Personalized Popularity Awareness","authors":"Davide Abbattista, Vito Walter Anelli, Tommaso Di Noia, Craig Macdonald, Aleksandr Vladimirovich Petrov","doi":"arxiv-2409.04329","DOIUrl":"https://doi.org/arxiv-2409.04329","url":null,"abstract":"In the realm of music recommendation, sequential recommender systems have\u0000shown promise in capturing the dynamic nature of music consumption.\u0000Nevertheless, traditional Transformer-based models, such as SASRec and\u0000BERT4Rec, while effective, encounter challenges due to the unique\u0000characteristics of music listening habits. In fact, existing models struggle to\u0000create a coherent listening experience due to rapidly evolving preferences.\u0000Moreover, music consumption is characterized by a prevalence of repeated\u0000listening, i.e., users frequently return to their favourite tracks, an\u0000important signal that could be framed as individual or personalized popularity. This paper addresses these challenges by introducing a novel approach that\u0000incorporates personalized popularity information into sequential\u0000recommendation. By combining user-item popularity scores with model-generated\u0000scores, our method effectively balances the exploration of new music with the\u0000satisfaction of user preferences. Experimental results demonstrate that a\u0000Personalized Most Popular recommender, a method solely based on user-specific\u0000popularity, outperforms existing state-of-the-art models. Furthermore,\u0000augmenting Transformer-based models with personalized popularity awareness\u0000yields superior performance, showing improvements ranging from 25.2% to 69.8%.\u0000The code for this paper is available at\u0000https://github.com/sisinflab/personalized-popularity-awareness.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142205384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}