Piyush Pawar, Benjamin McManus, Thomas Anthony, Jingzhen Yang, Thomas Kerwin, Despina Stavrinos
{"title":"模拟环境中危险标注和眼动追踪的人工智能自动化解决方案","authors":"Piyush Pawar , Benjamin McManus , Thomas Anthony , Jingzhen Yang , Thomas Kerwin , Despina Stavrinos","doi":"10.1016/j.aap.2025.108075","DOIUrl":null,"url":null,"abstract":"<div><div>High-fidelity simulators and sensors are commonly used in research to create immersive environments for studying real-world problems. This setup records detailed data, generating large datasets. In driving research, a full-scale car model repurposed as a driving simulator allows human subjects to navigate realistic driving scenarios. Data from these experiments are collected in raw form, requiring extensive manual annotation of roadway elements such as hazards and distractions. This process is often time-consuming, labor-intensive, and repetitive, causing delays in research progress.</div><div>This paper proposes an AI-driven solution to automate these tasks, enabling researchers to focus on analysis and advance their studies efficiently. The solution builds on previous driving behavior research using a high-fidelity full-cab simulator equipped with gaze-tracking cameras. It extends the capabilities of the earlier system described in Pawar’s (2021) “Hazard Detection in Driving Simulation using Deep Learning”, which performed only hazard detection. The enhanced system now integrates both hazard annotation and gaze-tracking data.</div><div>By combining vehicle handling parameters with drivers’ visual attention data, the proposed method provides a unified, detailed view of participants’ driving behavior across various simulated scenarios. This approach streamlines data analysis, accelerates research timelines, and enhances understanding of driving behavior.</div></div>","PeriodicalId":6926,"journal":{"name":"Accident; analysis and prevention","volume":"218 ","pages":"Article 108075"},"PeriodicalIF":5.7000,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence automated solution for hazard annotation and eye tracking in a simulated environment\",\"authors\":\"Piyush Pawar , Benjamin McManus , Thomas Anthony , Jingzhen Yang , Thomas Kerwin , Despina Stavrinos\",\"doi\":\"10.1016/j.aap.2025.108075\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>High-fidelity simulators and sensors are commonly used in research to create immersive environments for studying real-world problems. This setup records detailed data, generating large datasets. In driving research, a full-scale car model repurposed as a driving simulator allows human subjects to navigate realistic driving scenarios. Data from these experiments are collected in raw form, requiring extensive manual annotation of roadway elements such as hazards and distractions. This process is often time-consuming, labor-intensive, and repetitive, causing delays in research progress.</div><div>This paper proposes an AI-driven solution to automate these tasks, enabling researchers to focus on analysis and advance their studies efficiently. The solution builds on previous driving behavior research using a high-fidelity full-cab simulator equipped with gaze-tracking cameras. It extends the capabilities of the earlier system described in Pawar’s (2021) “Hazard Detection in Driving Simulation using Deep Learning”, which performed only hazard detection. 
The enhanced system now integrates both hazard annotation and gaze-tracking data.</div><div>By combining vehicle handling parameters with drivers’ visual attention data, the proposed method provides a unified, detailed view of participants’ driving behavior across various simulated scenarios. This approach streamlines data analysis, accelerates research timelines, and enhances understanding of driving behavior.</div></div>\",\"PeriodicalId\":6926,\"journal\":{\"name\":\"Accident; analysis and prevention\",\"volume\":\"218 \",\"pages\":\"Article 108075\"},\"PeriodicalIF\":5.7000,\"publicationDate\":\"2025-05-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Accident; analysis and prevention\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0001457525001617\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ERGONOMICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accident; analysis and prevention","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0001457525001617","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ERGONOMICS","Score":null,"Total":0}
Artificial intelligence automated solution for hazard annotation and eye tracking in a simulated environment
High-fidelity simulators and sensors are commonly used in research to create immersive environments for studying real-world problems. These setups record detailed data, generating large datasets. In driving research, a full-scale car model repurposed as a driving simulator allows human subjects to navigate realistic driving scenarios. Data from these experiments are collected in raw form and require extensive manual annotation of roadway elements such as hazards and distractions, a process that is time-consuming, labor-intensive, and repetitive, and that delays research progress.
This paper proposes an AI-driven solution to automate these tasks, enabling researchers to focus on analysis and advance their studies efficiently. The solution builds on previous driving behavior research using a high-fidelity full-cab simulator equipped with gaze-tracking cameras. It extends the capabilities of the earlier system described in Pawar’s (2021) “Hazard Detection in Driving Simulation using Deep Learning”, which performed only hazard detection. The enhanced system now integrates both hazard annotation and gaze-tracking data.
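As a rough illustration of the kind of automated hazard annotation described above (not the authors' actual pipeline, which the abstract does not detail), the sketch below runs an off-the-shelf pretrained object detector over recorded simulator frames and writes time-stamped hazard annotations to a CSV file. The model choice, the mapping from detector classes to "hazards", and the file layout are all assumptions made for the example.

```python
# Illustrative only: annotate recorded simulator frames with candidate hazards
# using an off-the-shelf pretrained detector (torchvision Faster R-CNN).
# The class list and file layout are assumptions, not the paper's system.
import csv
from pathlib import Path

import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Assumed mapping: treat these COCO classes as potential roadway hazards.
HAZARD_CLASSES = {"person", "bicycle", "car", "motorcycle", "bus", "truck"}

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]


def annotate_frames(frame_dir: str, out_csv: str, score_thresh: float = 0.6) -> None:
    """Write one row per detected hazard: frame file, class, score, bounding box."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "label", "score", "x1", "y1", "x2", "y2"])
        for frame_path in sorted(Path(frame_dir).glob("*.png")):
            img = read_image(str(frame_path))
            with torch.no_grad():
                pred = model([preprocess(img)])[0]
            for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
                name = categories[int(label)]
                if name in HAZARD_CLASSES and float(score) >= score_thresh:
                    writer.writerow([frame_path.name, name, round(float(score), 3),
                                     *[round(float(v), 1) for v in box]])


# Example (hypothetical paths):
# annotate_frames("session_01/frames", "session_01/hazards.csv")
```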
By combining vehicle handling parameters with drivers’ visual attention data, the proposed method provides a unified, detailed view of participants’ driving behavior across various simulated scenarios. This approach streamlines data analysis, accelerates research timelines, and enhances understanding of driving behavior.
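To make the "unified view" concrete, here is a minimal, hypothetical fusion step: gaze samples and vehicle-handling logs are aligned to the hazard annotations by timestamp, and each annotated hazard is flagged when the gaze point falls inside its bounding box. The column names, sampling rates, and matching tolerance are illustrative assumptions; the paper's actual data schema is not described in the abstract.

```python
# Illustrative fusion of hazard annotations, gaze samples, and vehicle telemetry
# by timestamp. All column names and tolerances are assumptions for this sketch.
import pandas as pd


def fuse_streams(hazards, gaze, vehicle, tolerance_s=0.05):
    """Return one row per hazard annotation, augmented with the nearest gaze
    sample and vehicle state, plus a flag for whether gaze fell on the hazard."""
    hazards = hazards.sort_values("t")
    gaze = gaze.sort_values("t")
    vehicle = vehicle.sort_values("t")

    # Nearest-neighbor join on the shared time axis (seconds).
    fused = pd.merge_asof(hazards, gaze, on="t", direction="nearest",
                          tolerance=tolerance_s)
    fused = pd.merge_asof(fused, vehicle, on="t", direction="nearest",
                          tolerance=tolerance_s)

    # Gaze is "on hazard" when the gaze point lies inside the hazard's box.
    fused["gaze_on_hazard"] = (
        (fused["gaze_x"] >= fused["x1"]) & (fused["gaze_x"] <= fused["x2"]) &
        (fused["gaze_y"] >= fused["y1"]) & (fused["gaze_y"] <= fused["y2"])
    )
    return fused


# Example with made-up data (timestamps in seconds, coordinates in pixels):
hazards = pd.DataFrame({"t": [1.00, 2.00], "label": ["pedestrian", "car"],
                        "x1": [100, 400], "y1": [50, 200],
                        "x2": [160, 520], "y2": [150, 300]})
gaze = pd.DataFrame({"t": [0.99, 2.02], "gaze_x": [130, 50], "gaze_y": [90, 90]})
vehicle = pd.DataFrame({"t": [1.01, 1.99], "speed_mps": [12.4, 8.1],
                        "steer_deg": [0.5, -3.0]})
print(fuse_streams(hazards, gaze, vehicle)[["t", "label", "speed_mps", "gaze_on_hazard"]])
```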
Journal description:
Accident Analysis & Prevention provides wide coverage of the general areas relating to accidental injury and damage, including the pre-injury and immediate post-injury phases. Published papers deal with medical, legal, economic, educational, behavioral, theoretical or empirical aspects of transportation accidents, as well as with accidents at other sites. Selected topics within the scope of the Journal may include: studies of human, environmental and vehicular factors influencing the occurrence, type and severity of accidents and injury; the design, implementation and evaluation of countermeasures; biomechanics of impact and human tolerance limits to injury; modelling and statistical analysis of accident data; policy, planning and decision-making in safety.