{"title":"鲁棒时空车道检测模型","authors":"Jiyong Zhang, Bo Wang, Hamad Naeem, Shengxin Dai","doi":"10.1177/03611981241260696","DOIUrl":null,"url":null,"abstract":"Lane lines are frequently interrupted in autonomous driving environments because of some objective conditions, such as occlusion or congestion, which often lead to the decreased detection performance of a model. Current detection methods relying on spatial information struggle to detect complete lane lines in such conditions. In this paper, we build a robust lane detection model by fusing spatiotemporal information and dilated convolution. The proposed model is aided by the dilated convolution, which expands the scope of convolutional processes to extract more lane feature information from various perception environments. Convolutional gate recurrent units (ConvGRUs) are employed at the high-level semantic phase to aid the proposed model to get more effective lane feature information by dealing with the spatiotemporal information of consecutive frames. Compared with models FCN, DeepLabv3, RefineNet, SCNN, Cheng-DET, LDNet, SegNet, SegNet-Ego-Lane, Res18, Res34, ResNet-18-SAD, ResNet-34-SAD, ENet-SAD, ReNet-101, R-18-E2E, R-34-E2E, R-101-SAD, R-101-E2E, ResNet34-Qin, LaneNet, PINET(64x32), UNet_ConvLSTMSegNet_ConvLSTM, LDSTNet, extensive experiments on three well-known lane detection benchmarks prove the usefulness of the proposed model, achieving robust results and competitive performance.","PeriodicalId":309251,"journal":{"name":"Transportation Research Record: Journal of the Transportation Research Board","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robust Spatiotemporal Lane Detection Model\",\"authors\":\"Jiyong Zhang, Bo Wang, Hamad Naeem, Shengxin Dai\",\"doi\":\"10.1177/03611981241260696\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Lane lines are frequently interrupted in autonomous driving environments because of some objective conditions, such as occlusion or congestion, which often lead to the decreased detection performance of a model. Current detection methods relying on spatial information struggle to detect complete lane lines in such conditions. In this paper, we build a robust lane detection model by fusing spatiotemporal information and dilated convolution. The proposed model is aided by the dilated convolution, which expands the scope of convolutional processes to extract more lane feature information from various perception environments. Convolutional gate recurrent units (ConvGRUs) are employed at the high-level semantic phase to aid the proposed model to get more effective lane feature information by dealing with the spatiotemporal information of consecutive frames. 
Compared with models FCN, DeepLabv3, RefineNet, SCNN, Cheng-DET, LDNet, SegNet, SegNet-Ego-Lane, Res18, Res34, ResNet-18-SAD, ResNet-34-SAD, ENet-SAD, ReNet-101, R-18-E2E, R-34-E2E, R-101-SAD, R-101-E2E, ResNet34-Qin, LaneNet, PINET(64x32), UNet_ConvLSTMSegNet_ConvLSTM, LDSTNet, extensive experiments on three well-known lane detection benchmarks prove the usefulness of the proposed model, achieving robust results and competitive performance.\",\"PeriodicalId\":309251,\"journal\":{\"name\":\"Transportation Research Record: Journal of the Transportation Research Board\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transportation Research Record: Journal of the Transportation Research Board\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/03611981241260696\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transportation Research Record: Journal of the Transportation Research Board","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/03611981241260696","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Lane lines are frequently interrupted in autonomous driving environments by conditions such as occlusion or congestion, which often degrade a model's detection performance. Current detection methods that rely on spatial information alone struggle to detect complete lane lines under these conditions. In this paper, we build a robust lane detection model by fusing spatiotemporal information with dilated convolution. Dilated convolution enlarges the receptive field of the convolutional layers, allowing the model to extract more lane feature information across varied perception environments. Convolutional gated recurrent units (ConvGRUs) are employed at the high-level semantic stage to help the model capture more effective lane features by processing the spatiotemporal information of consecutive frames. Extensive experiments on three well-known lane detection benchmarks, comparing against FCN, DeepLabv3, RefineNet, SCNN, Cheng-DET, LDNet, SegNet, SegNet-Ego-Lane, Res18, Res34, ResNet-18-SAD, ResNet-34-SAD, ENet-SAD, ResNet-101, R-18-E2E, R-34-E2E, R-101-SAD, R-101-E2E, ResNet34-Qin, LaneNet, PINet (64×32), UNet_ConvLSTM, SegNet_ConvLSTM, and LDSTNet, demonstrate the usefulness of the proposed model, which achieves robust results and competitive performance.
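For readers who want a concrete picture of the architecture the abstract describes, the following minimal PyTorch sketch combines a dilated-convolution encoder with a ConvGRU that fuses features across consecutive frames. It is an illustrative sketch, not the authors' published implementation: the module names (DilatedEncoder, SpatiotemporalLaneNet), channel widths, dilation rates, and clip length are all assumptions made here for demonstration.

```python
# Sketch of a spatiotemporal lane detector: dilated-conv encoder + ConvGRU fusion.
# All layer sizes and dilation rates below are illustrative assumptions, not the
# configuration reported in the paper.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """GRU cell whose gates are 2-D convolutions, so the hidden state
    keeps its spatial layout (B, C, H, W)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update + reset
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde  # convex blend of old and candidate state

class DilatedEncoder(nn.Module):
    """Stacked 3x3 convolutions with growing dilation; padding=dilation keeps
    the spatial size fixed while the receptive field expands."""
    def __init__(self, in_ch=3, ch=32, dilations=(1, 2, 4)):
        super().__init__()
        layers, c = [], in_ch
        for d in dilations:
            layers += [nn.Conv2d(c, ch, 3, padding=d, dilation=d), nn.ReLU(inplace=True)]
            c = ch
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class SpatiotemporalLaneNet(nn.Module):
    """Encode each frame, fuse features over time with a ConvGRU,
    then predict a per-pixel lane mask from the final hidden state."""
    def __init__(self):
        super().__init__()
        self.encoder = DilatedEncoder()
        self.gru = ConvGRUCell(32, 32)
        self.head = nn.Conv2d(32, 1, 1)  # lane-probability logits

    def forward(self, frames):            # frames: (B, T, 3, H, W)
        h = None
        for t in range(frames.size(1)):   # step the ConvGRU over consecutive frames
            h = self.gru(self.encoder(frames[:, t]), h)
        return self.head(h)

# Usage: a 5-frame clip at 128x256 yields a single-channel lane mask.
clip = torch.randn(2, 5, 3, 128, 256)
print(SpatiotemporalLaneNet()(clip).shape)  # torch.Size([2, 1, 128, 256])
```

The design choice mirrors the abstract's two ingredients: dilation widens the spatial context each feature sees without extra parameters, and the ConvGRU lets evidence from earlier frames fill in lane segments that are occluded in the current one.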