{"title":"基于双流视频的碰撞和濒临碰撞深度学习模型","authors":"Liang Shi, Feng Guo","doi":"10.1016/j.trc.2024.104794","DOIUrl":null,"url":null,"abstract":"<div><p>The use of videos for effective crash and near-crash prediction can significantly enhance the development of safety countermeasures and emergency response. This paper presents a two-stream hybrid model with temporal and spatial streams for crash and near-crash identification based on front-view video driving data. The novel temporal stream integrates optical flow and TimeSFormer, utilizing divided-space–time attention. The spatial stream employs TimeSFormer with space attention to complement spatial information that is not captured by the optical flow. An XGBoost classifier merges the two streams through late fusion. The proposed approach utilizes data from the Second Strategic Highway Research Program Naturalistic Driving Study, which encompasses 1922 crashes, 6960 near-crashes, and 8611 normal driving segments. The results demonstrate excellent performance, achieving an overall accuracy of 0.894. The F1 scores for crashes, near-crashes, and normal driving segments were 0.760, 0.892, and 0.923, respectively, indicating strong predictive power for all three categories. The proposed approach offers a highly effective and scalable solution for identifying crashes and near-crashes using front-view video driving data and has broad applications in the field of traffic safety.</p></div>","PeriodicalId":54417,"journal":{"name":"Transportation Research Part C-Emerging Technologies","volume":null,"pages":null},"PeriodicalIF":7.6000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Two-stream video-based deep learning model for crashes and near-crashes\",\"authors\":\"Liang Shi, Feng Guo\",\"doi\":\"10.1016/j.trc.2024.104794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The use of videos for effective crash and near-crash prediction can significantly enhance the development of safety countermeasures and emergency response. This paper presents a two-stream hybrid model with temporal and spatial streams for crash and near-crash identification based on front-view video driving data. The novel temporal stream integrates optical flow and TimeSFormer, utilizing divided-space–time attention. The spatial stream employs TimeSFormer with space attention to complement spatial information that is not captured by the optical flow. An XGBoost classifier merges the two streams through late fusion. The proposed approach utilizes data from the Second Strategic Highway Research Program Naturalistic Driving Study, which encompasses 1922 crashes, 6960 near-crashes, and 8611 normal driving segments. The results demonstrate excellent performance, achieving an overall accuracy of 0.894. The F1 scores for crashes, near-crashes, and normal driving segments were 0.760, 0.892, and 0.923, respectively, indicating strong predictive power for all three categories. 
The proposed approach offers a highly effective and scalable solution for identifying crashes and near-crashes using front-view video driving data and has broad applications in the field of traffic safety.</p></div>\",\"PeriodicalId\":54417,\"journal\":{\"name\":\"Transportation Research Part C-Emerging Technologies\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2024-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transportation Research Part C-Emerging Technologies\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0968090X24003152\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TRANSPORTATION SCIENCE & TECHNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transportation Research Part C-Emerging Technologies","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0968090X24003152","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TRANSPORTATION SCIENCE & TECHNOLOGY","Score":null,"Total":0}
Two-stream video-based deep learning model for crashes and near-crashes
Abstract: The use of videos for effective crash and near-crash prediction can significantly enhance the development of safety countermeasures and emergency response. This paper presents a two-stream hybrid model with temporal and spatial streams for crash and near-crash identification based on front-view video driving data. The novel temporal stream integrates optical flow and TimeSformer, utilizing divided space-time attention. The spatial stream employs TimeSformer with space attention to complement spatial information that is not captured by the optical flow. An XGBoost classifier merges the two streams through late fusion. The proposed approach utilizes data from the Second Strategic Highway Research Program Naturalistic Driving Study, which encompasses 1922 crashes, 6960 near-crashes, and 8611 normal driving segments. The results demonstrate excellent performance, achieving an overall accuracy of 0.894. The F1 scores for crashes, near-crashes, and normal driving segments were 0.760, 0.892, and 0.923, respectively, indicating strong predictive power for all three categories. The proposed approach offers a highly effective and scalable solution for identifying crashes and near-crashes using front-view video driving data and has broad applications in the field of traffic safety.
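The abstract describes a two-stream architecture: a temporal stream (optical flow fed into TimeSformer with divided space-time attention), a spatial stream (TimeSformer with space attention on RGB frames), and late fusion of the two stream outputs with an XGBoost classifier. The sketch below illustrates the late-fusion pattern only and is not the authors' implementation: the TimeSformer backbones and the optical-flow computation are replaced with a hypothetical PlaceholderVideoEncoder, and the clip shapes, extract_features helper, and toy labels are illustrative assumptions.

```python
# Minimal sketch of the two-stream late-fusion idea described in the abstract.
# NOT the paper's code: TimeSformer backbones and optical-flow extraction are
# replaced with placeholder modules so the example stays self-contained.
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, f1_score


class PlaceholderVideoEncoder(nn.Module):
    """Stand-in for a video transformer backbone (e.g., TimeSformer with
    divided space-time or space-only attention). It pools the clip and
    projects it to a fixed-size embedding."""

    def __init__(self, in_channels: int, embed_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(in_channels, embed_dim)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        pooled = clip.mean(dim=(2, 3, 4))   # global average pool over space-time
        return self.proj(pooled)            # (batch, embed_dim)


# Temporal stream would consume optical-flow fields (2 channels: dx, dy);
# spatial stream consumes raw RGB frames (3 channels).
temporal_encoder = PlaceholderVideoEncoder(in_channels=2)
spatial_encoder = PlaceholderVideoEncoder(in_channels=3)


def extract_features(flow_clips: torch.Tensor, rgb_clips: torch.Tensor) -> np.ndarray:
    """Run both streams and concatenate their embeddings (input to late fusion)."""
    with torch.no_grad():
        f_temporal = temporal_encoder(flow_clips)
        f_spatial = spatial_encoder(rgb_clips)
    return torch.cat([f_temporal, f_spatial], dim=1).numpy()


# Toy data standing in for front-view video segments: 32 clips, 8 frames, 64x64.
flow_clips = torch.randn(32, 2, 8, 64, 64)
rgb_clips = torch.randn(32, 3, 8, 64, 64)
labels = np.arange(32) % 3                 # 0 = normal, 1 = near-crash, 2 = crash

features = extract_features(flow_clips, rgb_clips)

# Late fusion: an XGBoost classifier over the concatenated stream embeddings.
fusion_clf = XGBClassifier(n_estimators=50, max_depth=3)
fusion_clf.fit(features, labels)
predictions = fusion_clf.predict(features)

print("accuracy:", accuracy_score(labels, predictions))
print("per-class F1:", f1_score(labels, predictions, average=None))
```

In the actual pipeline the placeholder encoders would presumably be replaced by trained TimeSformer models, with the flow clips derived from consecutive front-view frames, and the per-class F1 scores would correspond to the crash, near-crash, and normal-driving categories reported in the abstract.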
About the journal:
Transportation Research Part C: Emerging Technologies (TR_C) is dedicated to showcasing high-quality, scholarly research on the development, applications, and implications of transportation systems and emerging technologies. Our focus lies not solely on individual technologies, but rather on their broader implications for the planning, design, operation, control, maintenance, and rehabilitation of transportation systems, services, and components. In essence, the intellectual core of the journal revolves around the transportation aspect rather than the technology itself. We actively encourage the integration of quantitative methods from diverse fields such as operations research, control systems, complex networks, computer science, and artificial intelligence. Join us in exploring the intersection of transportation systems and emerging technologies to drive innovation and progress in the field.