When language and vision meet road safety: Leveraging multimodal large language models for video-based traffic accident analysis

Impact Factor 5.7 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (Ergonomics)
Ruixuan Zhang, Beichen Wang, Juexiao Zhang, Zilin Bian, Chen Feng, Kaan Ozbay
{"title":"When language and vision meet road safety: Leveraging multimodal large language models for video-based traffic accident analysis","authors":"Ruixuan Zhang ,&nbsp;Beichen Wang ,&nbsp;Juexiao Zhang ,&nbsp;Zilin Bian ,&nbsp;Chen Feng ,&nbsp;Kaan Ozbay","doi":"10.1016/j.aap.2025.108077","DOIUrl":null,"url":null,"abstract":"<div><div>The increasing availability of traffic videos functioning on a 24/7/365 time scale has the great potential of increasing the spatio-temporal coverage of traffic accidents, which will help improve traffic safety. However, analyzing footage from hundreds, if not thousands, of traffic cameras in a 24/7/365 working protocol still remains an extremely challenging task, as current vision-based approaches primarily focus on extracting raw information, such as vehicle trajectories or individual object detection, but require laborious post-processing to derive actionable insights. We propose SeeUnsafe, a new framework that integrates Multimodal Large Language Model (MLLM) agents to transform video-based traffic accident analysis from a traditional extraction-then-explanation workflow to a more interactive, conversational approach. This shist significantly enhances processing throughput by automating complex tasks like video classification and visual grounding, while improving adaptability by enabling seamless adjustments to diverse traffic scenarios and user-defined queries. Our framework employs a severity-based aggregation strategy to handle videos of various lengths and a novel multimodal prompt to generate structured responses for review and evaluation to enable fine-grained visual grounding. We introduce IMS (Information Matching Score), a new MLLM-based metric for aligning structured responses with ground truth. We conduct extensive experiments on the Toyota Woven Traffic Safety dataset, demonstrating that SeeUnsafe effectively performs accident-aware video classification and enables visual grounding by building upon off-the-shelf MLLMs. Our code will be made publicly available upon acceptance.</div></div>","PeriodicalId":6926,"journal":{"name":"Accident; analysis and prevention","volume":"219 ","pages":"Article 108077"},"PeriodicalIF":5.7000,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accident; analysis and prevention","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0001457525001630","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ERGONOMICS","Score":null,"Total":0}
Citations: 0

Abstract

The increasing availability of traffic videos operating on a 24/7/365 basis has great potential to increase the spatio-temporal coverage of traffic accidents, which will help improve traffic safety. However, analyzing footage from hundreds, if not thousands, of traffic cameras under a 24/7/365 working protocol remains an extremely challenging task: current vision-based approaches primarily focus on extracting raw information, such as vehicle trajectories or individual object detections, and require laborious post-processing to derive actionable insights. We propose SeeUnsafe, a new framework that integrates Multimodal Large Language Model (MLLM) agents to transform video-based traffic accident analysis from a traditional extraction-then-explanation workflow into a more interactive, conversational approach. This shift significantly enhances processing throughput by automating complex tasks such as video classification and visual grounding, while improving adaptability by enabling seamless adjustment to diverse traffic scenarios and user-defined queries. Our framework employs a severity-based aggregation strategy to handle videos of various lengths and a novel multimodal prompt to generate structured responses for review and evaluation, enabling fine-grained visual grounding. We introduce IMS (Information Matching Score), a new MLLM-based metric for aligning structured responses with ground truth. We conduct extensive experiments on the Toyota Woven Traffic Safety dataset, demonstrating that SeeUnsafe effectively performs accident-aware video classification and enables visual grounding by building on off-the-shelf MLLMs. Our code will be made publicly available upon acceptance.
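The workflow described in the abstract can be pictured with a short, self-contained sketch. The snippet below is purely illustrative: a hypothetical `query_mllm` stub stands in for a real multimodal LLM call, a simple max-over-clips rule stands in for the paper's severity-based aggregation, and a set-overlap score stands in for the MLLM-based Information Matching Score (IMS). None of these names or rules reflect the authors' actual implementation.

```python
# Minimal, hypothetical sketch of a SeeUnsafe-style pipeline: split a video into
# clips, ask an MLLM agent for a structured per-clip response, aggregate clip
# severities into a video-level label, and compare the grounded objects against
# ground truth with a toy IMS-like score. All functions here are illustrative
# assumptions, not the paper's implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class ClipResponse:
    """Structured response assumed to come back from an MLLM agent for one clip."""
    severity: float      # assumed scale: 0 = safe, 1 = severe accident
    objects: List[str]   # objects the model grounded in the clip


def query_mllm(clip_id: int) -> ClipResponse:
    """Placeholder for a real multimodal LLM call; returns canned output here."""
    canned = {
        0: ClipResponse(0.1, ["sedan"]),
        1: ClipResponse(0.8, ["sedan", "pedestrian"]),
        2: ClipResponse(0.9, ["sedan", "pedestrian", "traffic light"]),
    }
    return canned.get(clip_id, ClipResponse(0.0, []))


def aggregate_severity(responses: List[ClipResponse], threshold: float = 0.5) -> str:
    """Assumed severity-based aggregation rule: flag the whole video as an
    accident if any clip's severity exceeds the threshold, so the number of
    clips (i.e., video length) does not change the decision logic."""
    peak = max(r.severity for r in responses)
    return "accident" if peak >= threshold else "no_accident"


def information_matching_score(predicted: List[str], ground_truth: List[str]) -> float:
    """Toy stand-in for IMS: fraction of ground-truth items recovered in the
    structured response (the paper uses an MLLM-based matcher instead)."""
    if not ground_truth:
        return 1.0
    hits = sum(1 for item in ground_truth if item in predicted)
    return hits / len(ground_truth)


if __name__ == "__main__":
    clips = [query_mllm(i) for i in range(3)]
    label = aggregate_severity(clips)
    ims = information_matching_score(
        predicted=[obj for c in clips for obj in c.objects],
        ground_truth=["sedan", "pedestrian", "traffic light"],
    )
    print(f"video label: {label}, IMS (toy): {ims:.2f}")
```

The max-over-clips rule is only one plausible way to make the decision length-invariant; the paper's actual aggregation strategy and IMS computation are defined in the full text.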
Source journal metrics:
CiteScore: 11.90
Self-citation rate: 16.90%
Articles published: 264
Review time: 48 days
Journal introduction: Accident Analysis & Prevention provides wide coverage of the general areas relating to accidental injury and damage, including the pre-injury and immediate post-injury phases. Published papers deal with medical, legal, economic, educational, behavioral, theoretical or empirical aspects of transportation accidents, as well as with accidents at other sites. Selected topics within the scope of the Journal may include: studies of human, environmental and vehicular factors influencing the occurrence, type and severity of accidents and injury; the design, implementation and evaluation of countermeasures; biomechanics of impact and human tolerance limits to injury; modelling and statistical analysis of accident data; policy, planning and decision-making in safety.