Keywords-enhanced Contrastive Learning Model for travel recommendation

Impact Factor: 7.4 | CAS Region 1 (Management Science) | JCR Q1, Computer Science, Information Systems
Lei Chen, Guixiang Zhu, Weichao Liang, Jie Cao, Yihan Chen
DOI: 10.1016/j.ipm.2024.103874
Journal: Information Processing & Management
Published: 2024-08-31 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0306457324002334
Citations: 0

Abstract

Travel recommendation aims to infer users' travel intentions by analyzing their historical behaviors on Online Travel Agencies (OTAs). However, crucial keywords in clicked travel product titles, such as destination and itinerary duration, which indicate tourists' intentions, are often overlooked. Additionally, most previous studies consider only stable long-term user interests or temporary short-term user preferences, making the recommendation performance unreliable. To mitigate these constraints, this paper proposes a novel Keywords-enhanced Contrastive Learning Model (KCLM). KCLM simultaneously implements personalized travel recommendation and keyword generation tasks, integrating long-term and short-term user preferences within both tasks. Furthermore, we design two kinds of contrastive learning tasks for better user and travel product representation learning. The preference contrastive learning aims to bridge the gap between long-term and short-term user preferences. The multi-view contrastive learning focuses on modeling the coarse-grained commonality between clicked products and their keywords. Extensive experiments are conducted on two tourism datasets and a large-scale e-commerce dataset. The experimental results demonstrate that KCLM achieves substantial gains on both metrics compared to the best-performing baseline methods: HR@20 improves by 5.79%–14.13%, and MRR@20 by 6.57%–18.50%. Furthermore, to give an intuitive understanding of KCLM's keyword generation, we provide a case study on several randomly selected examples.
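The abstract does not specify the exact form of the two contrastive objectives. As an illustration of the kind of loss such tasks typically build on (pulling paired views of the same user or product together and pushing in-batch negatives apart), here is a minimal InfoNCE-style sketch in NumPy; the function name, embedding dimensions, and temperature are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss over a batch.

    Each anchor is pulled toward its positive (same row index) and
    pushed away from all other rows in the batch (in-batch negatives).
    """
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (batch, batch) similarity matrix
    # Numerically stable log-softmax; the diagonal holds the positive pairs
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy example: long-term vs. short-term user preference embeddings
# (hypothetical 8-dim vectors; the short-term view is a perturbed copy)
rng = np.random.default_rng(0)
long_term = rng.normal(size=(4, 8))
short_term = long_term + 0.05 * rng.normal(size=(4, 8))
loss = info_nce_loss(long_term, short_term)
```

In this reading, the paper's preference contrastive task would treat a user's long-term and short-term preference vectors as the two views, while the multi-view task would pair a clicked product's embedding with an embedding of its title keywords.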

Source journal: Information Processing & Management (Engineering & Technology — Computer Science: Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Articles per year: 276
Review time: 39 days
Journal description: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing. We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.