Real-Time Lane Instance Segmentation Using SegNet and Image Processing
Gad Gad, Ahmed Mahmoud Annaby, N. Negied, M. Darweesh
2020 2nd Novel Intelligent and Leading Emerging Sciences Conference (NILES), 2020-10-24
DOI: 10.1109/NILES50944.2020.9257977
Citations: 8
Abstract
The rising interest in assistive and autonomous driving systems throughout the past decade has led to an active research community around perception and scene-interpretation problems such as lane detection. Traditional lane detection methods rely on specialized, hand-crafted features, which are slow to compute and do not scale well. Recent methods that rely on deep learning and are trained for pixel-wise lane segmentation achieve better results and generalize to a broad range of road and weather conditions. However, practical algorithms must be computationally inexpensive, due to the limited resources on vehicle-based platforms, yet accurate enough to meet safety requirements. In this approach, an encoder-decoder deep learning architecture generates a binary segmentation of the lanes, the binary segmentation map is then post-processed to separate the lanes, and a sliding window extracts each lane to produce the lane instance segmentation image. The method was validated on the TuSimple dataset, achieving competitive results.
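To make the post-processing step concrete, below is a minimal sketch (not the authors' code) of how a binary lane mask produced by the encoder-decoder network could be separated into lane instances with a column histogram and a sliding window. The function name and all parameters (n_windows, margin, min_pixels) are illustrative assumptions, not values reported in the paper.

```python
# Illustrative sliding-window lane separation on a binary segmentation mask.
# Assumption: mask is a 2-D array of {0, 1} where 1 marks lane pixels.
import numpy as np

def lanes_from_binary_mask(mask, n_windows=9, margin=30, min_pixels=20):
    """Return an instance map where each detected lane gets a distinct integer label."""
    h, w = mask.shape
    instance_map = np.zeros((h, w), dtype=np.uint8)

    # Column histogram of the lower half of the mask: peaks suggest lane base positions.
    histogram = mask[h // 2:, :].sum(axis=0)
    threshold = 0.3 * histogram.max() if histogram.max() > 0 else 1
    candidate_cols = np.where(histogram > threshold)[0]
    if candidate_cols.size == 0:
        return instance_map
    # Group adjacent candidate columns into one base position per lane.
    splits = np.where(np.diff(candidate_cols) > margin)[0] + 1
    bases = [int(np.mean(g)) for g in np.split(candidate_cols, splits)]

    ys, xs = np.nonzero(mask)
    window_height = h // n_windows

    for lane_id, base in enumerate(bases, start=1):
        current_x = base
        for win in range(n_windows):
            # Slide a window upward from the bottom of the image.
            y_high = h - win * window_height
            y_low = y_high - window_height
            in_window = ((ys >= y_low) & (ys < y_high) &
                         (xs >= current_x - margin) & (xs < current_x + margin))
            if in_window.sum() >= min_pixels:
                # Re-centre the next window on the mean x of the pixels found.
                current_x = int(xs[in_window].mean())
            instance_map[ys[in_window], xs[in_window]] = lane_id
    return instance_map

if __name__ == "__main__":
    # Synthetic mask with two roughly vertical lanes as a quick smoke test.
    mask = np.zeros((90, 160), dtype=np.uint8)
    mask[:, 40:44] = 1
    mask[:, 110:114] = 1
    labels = lanes_from_binary_mask(mask)
    print("lane labels found:", np.unique(labels))
```

In a real pipeline the binary mask would come from the SegNet-style encoder-decoder, and the per-lane pixels would typically be fitted with a polynomial before being drawn back onto the instance image.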