LingoTrip: Spatiotemporal context prompt driven large language model for individual trip prediction

Zhenlin Qin, Pengfei Zhang, Leizhen Wang, Zhenliang Ma
Journal of Public Transportation, Volume 27, Article 100117 (2025). DOI: 10.1016/j.jpubtr.2025.100117
Impact Factor: 2.0 · JCR Q3 (Transportation) · CAS Category 4 (Engineering & Technology)
Citations: 0

Abstract

Large language models (LLMs) have shown superior performance in many language-related tasks. It is promising to frame individual mobility prediction as a language modeling problem and use pretrained LLMs to predict an individual's next trip information (e.g., time and location) for personalized travel recommendations. In principle, this approach is expected to overcome the common limitations of data-driven prediction models in zero-/few-shot learning, generalization, and interpretability. This paper proposes the LingoTrip model for predicting an individual's next trip location by designing spatiotemporal context prompts for LLMs. The designed prompting strategies enable LLMs to capture implicit land use information (trip purposes), spatiotemporal mobility patterns (choice preferences), and geographical dependencies of the stations used (choice variability). LingoTrip is validated using Hong Kong Mass Transit Railway trip data by comparing it with state-of-the-art data-driven mobility prediction models under different training data sizes. Sensitivity analyses are performed for the model hyperparameters and their tuning methods to adapt the model to other datasets. The results show that LingoTrip outperforms data-driven models in prediction accuracy, transferability (between individuals), zero-/few-shot learning (limited training sample size), and interpretability of predictions. The LingoTrip model can facilitate the effective provision of personalized information in system crowding and disruption contexts (i.e., proactively providing information to targeted individuals).
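The paper's actual prompt design is not reproduced in this abstract. As a rough illustration of the general idea of a spatiotemporal context prompt, which encodes recent trips (choice preferences), the set of stations used (choice variability), and time-of-day/day-of-week context standing in for implicit trip purposes, the following Python sketch composes such a prompt from a trip history. The field names, prompt wording, and example data are hypothetical and do not correspond to LingoTrip's published prompting strategies.

```python
# Illustrative sketch only: a minimal spatiotemporal context prompt builder,
# assuming a trip history of (timestamp, origin_station) records.
# The prompt wording and data layout are hypothetical, not the paper's design.
from datetime import datetime
from typing import List, Tuple


def build_prompt(history: List[Tuple[datetime, str]], now: datetime) -> str:
    """Compose a next-trip-location prompt from recent trips."""
    # Recent trips encode spatiotemporal mobility patterns (choice preferences).
    recent = [
        f"- {t:%A %H:%M}: entered station {station}"
        for t, station in history[-10:]
    ]
    # Distinct stations hint at geographical dependencies (choice variability);
    # weekday/time context stands in for implicit land use (trip purposes).
    stations = sorted({s for _, s in history})
    return (
        "You are predicting a metro rider's next trip.\n"
        "Recent trips:\n" + "\n".join(recent) + "\n"
        f"Stations used so far: {', '.join(stations)}.\n"
        f"Current time: {now:%A %H:%M}.\n"
        "Which station will the rider enter next? Answer with one station name."
    )


if __name__ == "__main__":
    history = [
        (datetime(2024, 5, 6, 8, 15), "Kowloon Tong"),
        (datetime(2024, 5, 6, 18, 40), "Central"),
        (datetime(2024, 5, 7, 8, 10), "Kowloon Tong"),
    ]
    print(build_prompt(history, datetime(2024, 5, 7, 18, 30)))
```

In a deployment of this kind, the assembled prompt would be passed to a pretrained LLM, and the returned station name would serve as the next-trip-location prediction; how LingoTrip itself structures and tunes these prompts is described in the full paper.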
Source Journal
CiteScore: 6.40
Self-citation rate: 0.00%
Annual publications: 29
Review time: 26 days
About the Journal: The Journal of Public Transportation, affiliated with the Center for Urban Transportation Research, is an international peer-reviewed open access journal focused on various forms of public transportation. It publishes original research from diverse academic disciplines, including engineering, economics, planning, and policy, emphasizing innovative solutions to transportation challenges. Content covers mobility services available to the general public, such as line-based services and shared fleets, offering insights beneficial to passengers, agencies, service providers, and communities.