Transforming Highway Safety With Autonomous Drones and AI: A Framework for Incident Detection and Emergency Response
Authors: Muhammad Farhan; Hassan Eesaar; Afaq Ahmed; Kil To Chong; Hilal Tayara
Journal: IEEE Open Journal of Vehicular Technology, vol. 6, pp. 829-845 (Q1, Engineering, Electrical & Electronic; IF 5.3)
DOI: 10.1109/OJVT.2025.3549387
Published: 2025-03-11
URL: https://ieeexplore.ieee.org/document/10918802/
Abstract
Highway accidents pose serious safety risks, often resulting in severe injuries and fatalities due to delayed detection and response. Traditional accident management relies heavily on manual reporting, which can be inefficient and error-prone, costing lives. This paper proposes a novel framework that integrates autonomous aerial systems (drones) with advanced deep learning models to enhance real-time accident detection and response capabilities. The system not only dispatches drones but also provides live accident footage, identifies accidents, and aids in coordinating the emergency response. In this study, we implemented the system in the Gazebo simulation environment, where an autonomous drone navigates to a specified location based on navigation commands generated by a large language model (LLM) that processes the emergency call transcript. Additionally, we created a dedicated accident dataset to train a YOLOv11m model for precise accident detection. At the accident location, the drone provides a live video feed and the YOLO model detects the incident; the high-resolution images captured after detection are analyzed by Moondream2, a vision-language model (VLM), to generate detailed textual descriptions of the scene, which are further refined by GPT-4 Turbo, a large language model, into concise incident reports and actionable suggestions. This end-to-end system combines autonomous navigation, incident detection, and incident response, demonstrating its potential as a scalable and efficient solution for incident response management. The initial implementation shows promising accuracy, validated through Gazebo simulation. Future work will focus on hardware implementation for real-world deployment in highway incident management systems.
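The pipeline the abstract describes can be sketched as a sequence of stages. The following is a minimal, hypothetical Python sketch of that control flow only: every model call (the LLM call parser, YOLOv11m detector, Moondream2 captioner, and GPT-4 Turbo summarizer) is replaced with a stub, and all function names, the coordinate format, and the placeholder outputs are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the detection-to-report pipeline from the paper.
# All model calls are stubbed; the real system uses an LLM for call parsing,
# YOLOv11m for detection, Moondream2 for captioning, and GPT-4 Turbo for reports.

def parse_emergency_call(transcript: str) -> dict:
    """Stub for the LLM step that turns an emergency call into a waypoint.
    Assumes (for illustration only) the transcript ends with 'at lat, lon'."""
    lat, lon = (float(x) for x in transcript.split("at")[-1].split(","))
    return {"lat": lat, "lon": lon}

def detect_accident(frame) -> list:
    """Stub for YOLOv11m inference on a drone video frame."""
    return [{"label": "accident", "confidence": 0.91}]  # placeholder detection

def describe_scene(frame) -> str:
    """Stub for the Moondream2 VLM scene-description step."""
    return "Two vehicles collided in the right lane; debris on the roadway."

def draft_report(description: str, detections: list) -> str:
    """Stub for the GPT-4 Turbo step that condenses the VLM output."""
    conf = max(d["confidence"] for d in detections)
    return f"Incident confirmed (conf {conf:.2f}): {description}"

def run_pipeline(transcript: str, frame=None) -> str:
    waypoint = parse_emergency_call(transcript)   # 1. call -> navigation target
    # 2. drone navigates to `waypoint` (simulated in Gazebo in the paper)
    detections = detect_accident(frame)           # 3. YOLO accident detection
    if not detections:
        return "No incident detected."
    description = describe_scene(frame)           # 4. VLM scene description
    return draft_report(description, detections)  # 5. LLM incident report

print(run_pipeline("Crash reported at 35.84, 127.13"))
```

The value of this staging is that each model can be swapped or evaluated independently, which mirrors how the paper validates the navigation and detection components separately in simulation.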