{"title":"TADS: a novel dataset for road traffic accident detection from a surveillance perspective","authors":"Yachuang Chai, Jianwu Fang, Haoquan Liang, Wushouer Silamu","doi":"10.1007/s11227-024-06429-7","DOIUrl":null,"url":null,"abstract":"<p>With the continuous development of socio-economics, the rapid increase in the use of road vehicles has led to increasingly severe issues regarding traffic accidents. Timely and accurate detection of road traffic accidents is crucial for mitigating casualties and alleviating traffic congestion. Consequently, road traffic accident detection has become a focal point of research recently. With the assistance of advanced technologies such as deep learning, researchers have designed more accurate and effective methods for detecting road traffic accidents. However, deep learning models are often constrained by the scale and distribution of their training datasets. Presently, datasets specifically tailored for road traffic accident detection suffer from limitations in scale and diversity. Furthermore, influenced by the recent surge in research on intelligent driver assistance systems, datasets from the surveillance perspective (the third-person viewpoint) are fewer than those from the driver’s perspective (the first-person viewpoint). Considering these shortcomings, this paper proposes a new dataset, Traffic Accident Detection from the Perspective of Surveillance (TADS). To the best of our knowledge, we are the first to attempt to detect traffic accident under the surveillance perspective with the aid of eye-gaze data. Leveraging the special data components within this dataset, we design the RF-RG model (input: the RGB and optical flow values of the frames; output: the RGB and gaze values of the predicted frame) for detecting road traffic accidents from a surveillance perspective. Comparative experiments and analyses are conducted with existing major detection methods to validate the efficacy of the proposed dataset and the approach. The TADS dataset has been made available at: https://github.com/cyc-gh/TADS/.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"173 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Supercomputing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11227-024-06429-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With continued socio-economic development, the rapid growth in road vehicle use has made traffic accidents an increasingly severe problem. Timely and accurate detection of road traffic accidents is crucial for mitigating casualties and alleviating traffic congestion, and accident detection has consequently become a recent focus of research. With the help of advanced technologies such as deep learning, researchers have designed more accurate and effective methods for detecting road traffic accidents. However, deep learning models are often constrained by the scale and distribution of their training datasets, and the datasets currently tailored for road traffic accident detection are limited in both scale and diversity. Furthermore, driven by the recent surge in research on intelligent driver assistance systems, datasets captured from the surveillance perspective (the third-person viewpoint) are fewer than those captured from the driver's perspective (the first-person viewpoint). To address these shortcomings, this paper proposes a new dataset, Traffic Accident Detection from the Perspective of Surveillance (TADS). To the best of our knowledge, this is the first attempt to detect traffic accidents from the surveillance perspective with the aid of eye-gaze data. Leveraging the special data components within this dataset, we design the RF-RG model (input: the RGB and optical-flow values of observed frames; output: the RGB and gaze values of the predicted frame) for detecting road traffic accidents from a surveillance perspective. Comparative experiments and analyses against existing major detection methods validate the efficacy of the proposed dataset and the approach. The TADS dataset is available at: https://github.com/cyc-gh/TADS/.
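
The abstract only specifies the RF-RG model's interface (RGB + optical flow in, predicted-frame RGB + gaze out), not its architecture. The sketch below is a minimal, hypothetical PyTorch illustration of that interface; the class name RFRGSketch, the layer choices, and all channel sizes are assumptions for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of an RF-RG-style interface: encode the RGB frame and its
# optical flow, fuse the features, and decode both a predicted RGB frame and a
# single-channel gaze map. All architectural details are illustrative assumptions.
import torch
import torch.nn as nn


class RFRGSketch(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # Separate encoders for RGB (3 channels) and optical flow (2 channels).
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, hidden, 3, padding=1), nn.ReLU())
        self.flow_enc = nn.Sequential(nn.Conv2d(2, hidden, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * hidden, hidden, 3, padding=1)
        # Two output heads: predicted-frame RGB and a gaze (saliency) map in [0, 1].
        self.rgb_head = nn.Conv2d(hidden, 3, 3, padding=1)
        self.gaze_head = nn.Sequential(nn.Conv2d(hidden, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor):
        # rgb: (B, 3, H, W) observed frame; flow: (B, 2, H, W) optical flow.
        fused = torch.relu(
            self.fuse(torch.cat([self.rgb_enc(rgb), self.flow_enc(flow)], dim=1))
        )
        return self.rgb_head(fused), self.gaze_head(fused)


if __name__ == "__main__":
    model = RFRGSketch()
    rgb = torch.randn(1, 3, 128, 128)
    flow = torch.randn(1, 2, 128, 128)
    pred_rgb, pred_gaze = model(rgb, flow)
    print(pred_rgb.shape, pred_gaze.shape)  # (1, 3, 128, 128) (1, 1, 128, 128)
```

In prediction-based anomaly detection schemes of this kind, the discrepancy between the predicted and the actually observed next frame is commonly used as the accident score; whether TADS's RF-RG model scores accidents this way is not stated in the abstract.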