Multimodal understanding with GPT-4o to enhance generalizable pedestrian behavior prediction
Je-Seok Ham, Jia Huang, Peng Jiang, Jinyoung Moon, Yongjin Kwon, Srikanth Saripalli, Changick Kim
Computers & Electrical Engineering, Volume 129, Article 110741. Published 2025-10-18. DOI: 10.1016/j.compeleceng.2025.110741
Citations: 0
Abstract
Pedestrian behavior prediction is one of the most critical tasks in urban driving scenarios, playing a key role in ensuring road safety. Traditional learning-based methods have relied on vision models for pedestrian behavior prediction. However, fully understanding pedestrians’ behaviors in advance is very challenging due to the complex driving environments and the multifaceted interactions between pedestrians and road elements. Additionally, these methods often show a limited understanding of driving environments not included in the training data. The emergence of Multimodal Large Language Models (MLLMs) provides an innovative approach to addressing these challenges through advanced reasoning capabilities. This paper presents OmniPredict, the first study to apply GPT-4o(mni), a state-of-the-art MLLM, to pedestrian behavior prediction in urban driving scenarios. We assessed the model using the JAAD and WiDEVIEW datasets, which are widely used for pedestrian behavior analysis. Our method utilized multiple contextual modalities and achieved 67% accuracy in a zero-shot setting without any task-specific training, surpassing the performance of the latest MLLM baselines by 10%. Furthermore, when incorporating additional contextual information, the experimental results demonstrated a significant increase in prediction accuracy across four behavior types (crossing, occlusion, action, and look). We also validated the model's generalization ability by comparing its responses across various road environment scenarios. OmniPredict exhibits strong generalization capabilities, demonstrating robust decision-making in diverse, rare, and unseen driving scenarios. These findings highlight the potential of MLLMs to enhance pedestrian behavior prediction, paving the way for safer and more informed decision-making in road environments.
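The zero-shot setup the abstract describes (prompting a multimodal model with a driving-scene frame plus textual context, and asking for the four behavior labels) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual prompts or pipeline: the prompt wording, the `build_messages` helper, and the context string are all assumptions made for illustration.

```python
import base64
import os

# The four behavior types evaluated in the paper.
BEHAVIORS = ["crossing", "occlusion", "action", "look"]

def build_messages(image_b64: str, context: str) -> list:
    """Assemble a multimodal chat payload asking an MLLM to predict
    pedestrian behavior from a scene image plus textual context.
    (Illustrative prompt wording; not the authors' prompt.)"""
    prompt = (
        "You are assisting an autonomous-driving perception system. "
        f"Context: {context}\n"
        "For the pedestrian in the image, answer yes/no for each of: "
        + ", ".join(BEHAVIORS) + "."
    )
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }]

# Build an example payload from placeholder JPEG bytes.
fake_jpeg = base64.b64encode(b"...jpeg bytes...").decode()
messages = build_messages(fake_jpeg, "Urban intersection, pedestrian near curb.")

# The actual GPT-4o call is only attempted if a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
    print(reply.choices[0].message.content)
```

In a zero-shot setting like the one evaluated, no fine-tuning occurs: all task information is carried by the prompt and the attached image, which is what lets the model generalize to driving environments absent from any training set.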
About the journal:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.