Scene-independent feature- and classifier-based vehicle headlight and shadow removal in video sequences

Qun Li, Edgar A. Bernal, Matthew Shreve, R. Loce
DOI: 10.1109/WACVW.2016.7470115
Published in: 2016 IEEE Winter Applications of Computer Vision Workshops (WACVW), 2016-03-10
Citations: 5

Abstract

Detection of moving and foreground objects is a key step in video-based object tracking within computer vision applications such as surveillance and traffic monitoring. Foreground object detection and segmentation are usually performed based on appearance. Hence, significant detection errors can be incurred due to shadows and light sources. Most existing shadow detection algorithms exploit a large set of assumptions to limit complexity and, at the same time, rely on carefully selected parameters either in the shadow model or in the decision threshold. This limits their accuracy and extensibility to different scenarios. Furthermore, most traditional shadow detection algorithms operate on each pixel in the originally detected foreground mask and make pixel-wise decisions, which is not only time-consuming but also error-prone. Little work has been done to address false foreground detection caused by vehicle headlights at nighttime. In this paper, we introduce an efficient and effective algorithm for headlight/shadow removal in model-based foreground detection via background estimation and subtraction. The underlying assumption is that headlights and shadows do not significantly affect the texture of the background. We train a classifier to discriminate between background affected and unaffected by shadows or headlights in a novel intermediate feature space. Advantages resulting from the choice of feature space in our approach include robustness to differences in background texture (i.e., the method is not scene-dependent), greater discriminability between positive and negative samples, and simplification of the training process.
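The core assumption — that shadows and headlights rescale intensity but largely preserve background texture — can be illustrated with a minimal sketch. This is not the authors' classifier or feature space; the function names, the use of gradient-magnitude correlation as a texture feature, and the decision threshold are all assumptions made here for illustration only.

```python
import numpy as np

def texture_similarity(patch_a, patch_b, eps=1e-8):
    """Normalized correlation between gradient magnitudes of two patches.
    Illumination changes (shadows, headlights) scale pixel intensity but
    largely preserve local texture, so a high score suggests the patch is
    still background, merely re-lit."""
    ga = np.hypot(*np.gradient(patch_a.astype(float)))
    gb = np.hypot(*np.gradient(patch_b.astype(float)))
    ga = (ga - ga.mean()) / (ga.std() + eps)
    gb = (gb - gb.mean()) / (gb.std() + eps)
    return float((ga * gb).mean())

def is_illumination_change(bg_patch, fg_patch, thresh=0.7):
    """Classify a foreground-detected patch against the background model:
    True if it is likely a shadow or headlight (texture preserved),
    False if it is likely a genuine object (texture disrupted).
    The 0.7 threshold is an illustrative choice, not from the paper."""
    return texture_similarity(bg_patch, fg_patch) > thresh

# Toy demo: a textured background, a "shadow" (darkened copy of the
# background, texture intact) and a real object (unrelated texture).
rng = np.random.default_rng(0)
background = rng.uniform(0, 255, (32, 32))
shadow = 0.5 * background                 # intensity halved, texture kept
vehicle = rng.uniform(0, 255, (32, 32))   # independent texture

print(is_illumination_change(background, shadow))   # shadow -> True
print(is_illumination_change(background, vehicle))  # object -> False
```

In the paper itself the decision is made by a trained classifier operating on an intermediate feature space rather than a fixed hand-set threshold, which is what makes the method scene-independent; the sketch above only conveys the texture-preservation intuition that motivates it.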