{"title":"Scene-independent feature- and classifier-based vehicle headlight and shadow removal in video sequences","authors":"Qun Li, Edgar A. Bernal, Matthew Shreve, R. Loce","doi":"10.1109/WACVW.2016.7470115","DOIUrl":null,"url":null,"abstract":"Detection of moving and foreground objects is a key step in video-based object tracking within computer vision applications such as surveillance and traffic monitoring. Foreground object detection and segmentation is usually performed based on appearance. Hence, significant detection errors can be incurred due to shadows and light sources. Most existing shadow detection algorithms exploit a large set of assumptions to limit complexity, and at the same time, rely on carefully selected parameters either in the shadow model or the decision threshold. This limits their accuracy and extensibility to different scenarios. Furthermore, most traditional shadow detection algorithms operate on each pixel in the originally detected foreground mask and make pixel-wise decisions, which is not only time-consuming but also error-prone. Little work has been done to address false foreground detection caused by vehicle headlights during nighttime. In this paper, we introduce an efficient and effective algorithm for headlight/shadow removal in modelbased foreground detection via background estimation and subtraction. The underlying assumption is that headlights and shadows do not significantly affect the texture of the background. We train a classifier to discriminate between background affected and unaffected by shadows or headlights in a novel intermediate feature space. Advantages resulting from the choice of feature space in our approach include robustness to differences in background texture (i.e., the method is not scene-dependent), larger discriminability between positive and negative samples, and simplification of the training process.","PeriodicalId":185674,"journal":{"name":"2016 IEEE Winter Applications of Computer Vision Workshops (WACVW)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Winter Applications of Computer Vision Workshops (WACVW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WACVW.2016.7470115","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Detection of moving and foreground objects is a key step in video-based object tracking within computer vision applications such as surveillance and traffic monitoring. Foreground object detection and segmentation are usually performed based on appearance; hence, significant detection errors can be incurred due to shadows and light sources. Most existing shadow detection algorithms exploit a large set of assumptions to limit complexity and, at the same time, rely on carefully selected parameters either in the shadow model or in the decision threshold. This limits their accuracy and extensibility to different scenarios. Furthermore, most traditional shadow detection algorithms operate on each pixel in the originally detected foreground mask and make pixel-wise decisions, which is not only time-consuming but also error-prone. Little work has been done to address false foreground detection caused by vehicle headlights at night. In this paper, we introduce an efficient and effective algorithm for headlight/shadow removal in model-based foreground detection via background estimation and subtraction. The underlying assumption is that headlights and shadows do not significantly affect the texture of the background. We train a classifier to discriminate between background regions affected and unaffected by shadows or headlights in a novel intermediate feature space. Advantages resulting from the choice of feature space in our approach include robustness to differences in background texture (i.e., the method is not scene-dependent), greater discriminability between positive and negative samples, and simplification of the training process.
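The abstract describes the pipeline only at a high level. As a minimal, hypothetical sketch of the underlying idea (headlights and shadows alter intensity but largely preserve background texture), the Python snippet below pairs OpenCV background subtraction with a block-wise normalized-cross-correlation texture test. The block size, the NCC feature, and the fixed threshold are illustrative assumptions, not the paper's method: the paper trains a classifier in its intermediate feature space, and the hard threshold here merely stands in for that learned decision boundary.

```python
# Hypothetical sketch of texture-based headlight/shadow removal after
# background subtraction. Assumes OpenCV's MOG2 background model; the
# NCC feature, 16-pixel blocks, and 0.7 threshold are illustrative only.
import cv2
import numpy as np

def texture_similarity(patch, bg_patch):
    """Normalized cross-correlation between a frame patch and the
    corresponding background patch. Shadows/headlights change brightness
    but largely preserve texture, so their similarity stays high."""
    p = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY).astype(np.float32)
    b = cv2.cvtColor(bg_patch, cv2.COLOR_BGR2GRAY).astype(np.float32)
    p -= p.mean()
    b -= b.mean()
    denom = np.sqrt((p * p).sum() * (b * b).sum()) + 1e-8
    return float((p * b).sum() / denom)  # NCC in [-1, 1]

def remove_illumination_artifacts(frame, background, mask,
                                  block=16, thresh=0.7):
    """Clear foreground blocks whose texture matches the estimated
    background, treating them as shadow/headlight regions, not objects."""
    cleaned = mask.copy()
    h, w = mask.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if cleaned[y:y + block, x:x + block].mean() < 128:
                continue  # block is mostly background already
            sim = texture_similarity(frame[y:y + block, x:x + block],
                                     background[y:y + block, x:x + block])
            if sim > thresh:  # texture preserved -> illumination artifact
                cleaned[y:y + block, x:x + block] = 0
    return cleaned

# Usage (per video frame):
#   bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
#   mask = bg_model.apply(frame)                 # raw foreground mask
#   clean = remove_illumination_artifacts(frame,
#                                         bg_model.getBackgroundImage(),
#                                         mask)
```

A block-wise (rather than pixel-wise) decision is used above for the same reason the abstract criticizes pixel-wise approaches: it is both faster and less noise-prone, since the texture statistic is computed over a neighborhood rather than a single sample.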