LC-LLM: Explainable lane-change intention and trajectory predictions with Large Language Models

Impact Factor: 12.5 | JCR Q1 (Transportation)
Mingxing Peng, Xusen Guo, Xianda Chen, Kehua Chen, Meixin Zhu, Long Chen, Fei-Yue Wang
{"title":"LC-LLM: Explainable lane-change intention and trajectory predictions with Large Language Models","authors":"Mingxing Peng ,&nbsp;Xusen Guo ,&nbsp;Xianda Chen ,&nbsp;Kehua Chen ,&nbsp;Meixin Zhu ,&nbsp;Long Chen ,&nbsp;Fei-Yue Wang","doi":"10.1016/j.commtr.2025.100170","DOIUrl":null,"url":null,"abstract":"<div><div>To ensure safe driving in dynamic environments, autonomous vehicles should possess the capability to accurately predict lane change intentions of surrounding vehicles in advance and forecast their future trajectories. Existing motion prediction approaches have ample room for improvement, particularly in terms of long-term prediction accuracy and interpretability. In this study, we address these challenges by proposing a Lane Change-Large Language Model (LC-LLM), an explainable lane change prediction model that leverages the strong reasoning capabilities and self explanation abilities of Large Language Models (LLMs). Essentially, we reformulate the lane change prediction task as a language modeling problem, processing heterogeneous driving scenario information as natural language prompts for LLMs and employing supervised fine-tuning to tailor LLMs specifically for lane change prediction task. Additionally, we finetune the Chain-of-Thought (CoT) reasoning to improve prediction transparency and reliability, and include explanatory requirements in the prompts during the inference stage. Therefore, our LC-LLM not only predicts lane change intentions and trajectories but also provides CoT reasoning and explanations for its predictions, enhancing its interpretability. Extensive experiments based on the large-scale highD dataset demonstrate the superior performance and interpretability of our LC-LLM in lane change prediction task. To the best of our knowledge, this is the first attempt to utilize LLMs for predicting lane change behavior. Our study shows that LLMs can effectively encode comprehensive interaction information for understanding driving behavior.</div></div>","PeriodicalId":100292,"journal":{"name":"Communications in Transportation Research","volume":"5 ","pages":"Article 100170"},"PeriodicalIF":12.5000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Communications in Transportation Research","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772424725000101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TRANSPORTATION","Score":null,"Total":0}
引用次数: 0

Abstract

To ensure safe driving in dynamic environments, autonomous vehicles should be capable of accurately predicting the lane change intentions of surrounding vehicles in advance and forecasting their future trajectories. Existing motion prediction approaches leave ample room for improvement, particularly in long-term prediction accuracy and interpretability. In this study, we address these challenges by proposing the Lane Change-Large Language Model (LC-LLM), an explainable lane change prediction model that leverages the strong reasoning and self-explanation abilities of Large Language Models (LLMs). Essentially, we reformulate lane change prediction as a language modeling problem, processing heterogeneous driving scenario information into natural language prompts for LLMs and applying supervised fine-tuning to tailor LLMs specifically for the lane change prediction task. Additionally, we incorporate Chain-of-Thought (CoT) reasoning into the fine-tuning process to improve prediction transparency and reliability, and include explanatory requirements in the prompts at the inference stage. As a result, our LC-LLM not only predicts lane change intentions and trajectories but also provides CoT reasoning and explanations for its predictions, enhancing its interpretability. Extensive experiments on the large-scale highD dataset demonstrate the superior performance and interpretability of LC-LLM in the lane change prediction task. To the best of our knowledge, this is the first attempt to utilize LLMs to predict lane change behavior. Our study shows that LLMs can effectively encode comprehensive interaction information for understanding driving behavior.
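To make the reformulation described in the abstract concrete, the sketch below shows one way heterogeneous driving scenario information might be serialized into a natural language prompt that asks for a lane-change intention, a future trajectory, and a CoT explanation. The field schema, prompt template, and output format here are hypothetical illustrations, not the paper's actual prompt design or fine-tuning recipe.

```python
import json
from dataclasses import dataclass


@dataclass
class VehicleState:
    """Kinematic state of one vehicle in lane-relative coordinates (hypothetical schema)."""
    vid: int
    lane: int
    x: float   # longitudinal position (m)
    y: float   # lateral offset from lane center (m)
    vx: float  # longitudinal velocity (m/s)
    vy: float  # lateral velocity (m/s)


def build_prompt(target: VehicleState, neighbors: list[VehicleState], horizon_s: float = 4.0) -> str:
    """Serialize scenario information into a natural-language prompt with an explanation requirement."""
    lines = [
        "You are an expert driving-behavior analyst.",
        f"Target vehicle: lane {target.lane}, lateral offset {target.y:+.2f} m, "
        f"speed {target.vx:.1f} m/s, lateral velocity {target.vy:+.2f} m/s.",
        "Surrounding vehicles:",
    ]
    for nb in neighbors:
        lines.append(
            f"- vehicle {nb.vid}: lane {nb.lane}, longitudinal gap {nb.x - target.x:+.1f} m, "
            f"speed {nb.vx:.1f} m/s."
        )
    lines += [
        f"Predict the target vehicle's intention over the next {horizon_s:.0f} s "
        "(one of: left lane change, lane keeping, right lane change) and its future "
        "trajectory as (x, y) waypoints at 1 s intervals.",
        'Reason step by step, then answer as JSON with keys "reasoning", "intention", and "trajectory".',
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    target = VehicleState(vid=0, lane=2, x=120.0, y=0.6, vx=28.4, vy=0.35)
    neighbors = [
        VehicleState(vid=1, lane=2, x=145.0, y=0.0, vx=24.0, vy=0.0),
        VehicleState(vid=2, lane=3, x=100.0, y=0.0, vx=30.1, vy=0.0),
    ]
    print(build_prompt(target, neighbors))

    # A fine-tuned LLM would be queried with this prompt; a mock reply is parsed
    # here only to show the expected structured output.
    mock_reply = json.dumps({
        "reasoning": "Positive lateral velocity toward lane 3 and a slower leader ahead.",
        "intention": "left lane change",
        "trajectory": [[148.4, 1.2], [176.9, 2.4], [205.4, 3.2], [233.9, 3.6]],
    })
    pred = json.loads(mock_reply)
    print(pred["intention"], pred["trajectory"][0])
```

Per the abstract, prompts of this kind would be paired with CoT-annotated targets for supervised fine-tuning, so the model learns to emit the reasoning alongside the intention and trajectory rather than the prediction alone.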