BERT-Trip: Effective and Scalable Trip Representation using Attentive Contrast Learning

Ai-Te Kuo, Haiquan Chen, Wei-Shinn Ku
{"title":"BERT-Trip:使用注意对比学习的有效和可扩展的旅行表征","authors":"Ai-Te Kuo, Haiquan Chen, Wei-Shinn Ku","doi":"10.1109/ICDE55515.2023.00053","DOIUrl":null,"url":null,"abstract":"Trip recommendation has drawn considerable attention over the past decade. In trip recommendation, a sequence of point-of-interests (POIs) are recommended for a given query which includes an origin and a destination. Recently the emergence of the attention mechanism and many attention-incorporated models have achieved great success in various fields. Trip recommendation problems demonstrate similar characteristics that can potentially benefit from the attention mechanism. However, applying the attention mechanism for trip recommendation is non-trivial. We are motivated to answer the following two research questions. (1) How can we learn trip representation effectively without labels? Unlike most of the natural language processing tasks, there are no ground-truth labels available for trip recommendation. (2) How can we learn trip representation effectively without handcrafting negative samples? In this paper, we cast the trip representation learning into a natural language processing (NLP) task. We propose BERT-Trip, a self-supervised contrast learning framework, to learn effective and scalable trip representation in support of time-sensitive and user-personalized trip recommendation. BERT-Trip builds on a Siamese network to maximize the similarity between the augmentations of trips with BERT as the backbone encoder. We utilize the masking strategy for generating augmented views (positive sample pairs) of trips in the Siamese network and employ the stop-gradient on one side of the Siamese network to eliminate the need to use any negative sample pairs or momentum encoders. Extensive experiments on real-world datasets demonstrate that BERT-Trip consistently outperformed the state-of-the-art methods in terms of all effectiveness metrics. Compared with the state-of-the-art methods, BERT-Trip is able to yield up to 24 percent and 40 percent increases in F1 score on the Flickr and the Weeplaces datasets, respectively. A rigorous performance evaluation of BERT-Trip on scalability up to 12800 POIs is also provided.","PeriodicalId":434744,"journal":{"name":"2023 IEEE 39th International Conference on Data Engineering (ICDE)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"BERT-Trip: Effective and Scalable Trip Representation using Attentive Contrast Learning\",\"authors\":\"Ai-Te Kuo, Haiquan Chen, Wei-Shinn Ku\",\"doi\":\"10.1109/ICDE55515.2023.00053\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Trip recommendation has drawn considerable attention over the past decade. In trip recommendation, a sequence of point-of-interests (POIs) are recommended for a given query which includes an origin and a destination. Recently the emergence of the attention mechanism and many attention-incorporated models have achieved great success in various fields. Trip recommendation problems demonstrate similar characteristics that can potentially benefit from the attention mechanism. However, applying the attention mechanism for trip recommendation is non-trivial. We are motivated to answer the following two research questions. (1) How can we learn trip representation effectively without labels? 
Unlike most of the natural language processing tasks, there are no ground-truth labels available for trip recommendation. (2) How can we learn trip representation effectively without handcrafting negative samples? In this paper, we cast the trip representation learning into a natural language processing (NLP) task. We propose BERT-Trip, a self-supervised contrast learning framework, to learn effective and scalable trip representation in support of time-sensitive and user-personalized trip recommendation. BERT-Trip builds on a Siamese network to maximize the similarity between the augmentations of trips with BERT as the backbone encoder. We utilize the masking strategy for generating augmented views (positive sample pairs) of trips in the Siamese network and employ the stop-gradient on one side of the Siamese network to eliminate the need to use any negative sample pairs or momentum encoders. Extensive experiments on real-world datasets demonstrate that BERT-Trip consistently outperformed the state-of-the-art methods in terms of all effectiveness metrics. Compared with the state-of-the-art methods, BERT-Trip is able to yield up to 24 percent and 40 percent increases in F1 score on the Flickr and the Weeplaces datasets, respectively. A rigorous performance evaluation of BERT-Trip on scalability up to 12800 POIs is also provided.\",\"PeriodicalId\":434744,\"journal\":{\"name\":\"2023 IEEE 39th International Conference on Data Engineering (ICDE)\",\"volume\":\"84 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 39th International Conference on Data Engineering (ICDE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDE55515.2023.00053\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 39th International Conference on Data Engineering (ICDE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDE55515.2023.00053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Trip recommendation has drawn considerable attention over the past decade. In trip recommendation, a sequence of points of interest (POIs) is recommended for a given query that includes an origin and a destination. Recently, the attention mechanism and many attention-based models have achieved great success in various fields. Trip recommendation exhibits similar characteristics and can potentially benefit from the attention mechanism. However, applying the attention mechanism to trip recommendation is non-trivial. We are motivated to answer the following two research questions. (1) How can we learn trip representations effectively without labels? Unlike most natural language processing tasks, no ground-truth labels are available for trip recommendation. (2) How can we learn trip representations effectively without handcrafting negative samples? In this paper, we cast trip representation learning as a natural language processing (NLP) task. We propose BERT-Trip, a self-supervised contrastive learning framework, to learn effective and scalable trip representations in support of time-sensitive and user-personalized trip recommendation. BERT-Trip builds on a Siamese network to maximize the similarity between augmented views of trips, with BERT as the backbone encoder. We use a masking strategy to generate the augmented views (positive sample pairs) of trips in the Siamese network and apply a stop-gradient on one side of the network, eliminating the need for negative sample pairs or momentum encoders. Extensive experiments on real-world datasets demonstrate that BERT-Trip consistently outperforms state-of-the-art methods on all effectiveness metrics, yielding up to 24 percent and 40 percent increases in F1 score on the Flickr and Weeplaces datasets, respectively. A rigorous evaluation of BERT-Trip's scalability up to 12,800 POIs is also provided.
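The abstract describes a SimSiam-style training setup: two masked views of the same trip serve as the positive pair, a BERT encoder embeds both, and a stop-gradient on one branch removes the need for negative pairs or a momentum encoder. The sketch below is a minimal illustration of that idea only, not the authors' implementation; all names (TripSiamese, mask_trip), model sizes, and hyperparameters are hypothetical assumptions.

```python
# Illustrative sketch (assumed, not the paper's code) of a Siamese network with a
# BERT backbone, masking-based augmentation, and a stop-gradient on one branch.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertConfig, BertModel

def mask_trip(poi_ids: torch.Tensor, mask_id: int, p: float = 0.15) -> torch.Tensor:
    """Randomly replace POI tokens with a [MASK] id to create an augmented view."""
    mask = torch.rand_like(poi_ids, dtype=torch.float) < p
    return torch.where(mask, torch.full_like(poi_ids, mask_id), poi_ids)

class TripSiamese(nn.Module):
    def __init__(self, num_pois: int, hidden: int = 256):
        super().__init__()
        # Small BERT encoder over POI-id "tokens" (vocabulary = POIs + special tokens).
        self.encoder = BertModel(BertConfig(
            vocab_size=num_pois + 2, hidden_size=hidden,
            num_hidden_layers=4, num_attention_heads=4,
            intermediate_size=4 * hidden))
        # Prediction head applied to each branch before comparing to the other
        # branch's (detached) embedding, as in SimSiam.
        self.predictor = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def embed(self, poi_ids: torch.Tensor) -> torch.Tensor:
        # Mean-pool the last hidden states into a single trip embedding.
        out = self.encoder(input_ids=poi_ids).last_hidden_state
        return out.mean(dim=1)

    def forward(self, view1: torch.Tensor, view2: torch.Tensor) -> torch.Tensor:
        z1, z2 = self.embed(view1), self.embed(view2)
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # Stop-gradient (detach) on the target branch: no negative pairs needed.
        loss = -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                 + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2
        return loss

# Usage: two masked views of the same batch of trips form the positive pair.
model = TripSiamese(num_pois=12800)
trips = torch.randint(2, 12802, (8, 10))   # batch of 8 trips, 10 POIs each
v1, v2 = mask_trip(trips, mask_id=1), mask_trip(trips, mask_id=1)
loss = model(v1, v2)
loss.backward()
```

Because the two branches share the encoder and only one side receives gradients per term, the objective maximizes agreement between augmented views without contrasting against other trips, which is what lets this style of training avoid handcrafted negative samples.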