2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2010), 29 August 2010.

Adaptive Patch-Based Background Modelling for Improved Foreground Object Segmentation and Tracking
Vikas Reddy, Conrad Sanderson, A. Sanin, B. Lovell. DOI: 10.1109/AVSS.2010.84

Abstract: A robust foreground object segmentation technique is proposed, capable of dealing with image sequences containing noise, illumination variations and dynamic backgrounds. The method employs contextual spatial information by analysing each image on an overlapping patch-by-patch basis and obtaining a low-dimensional texture descriptor for each patch. Each descriptor is passed through an adaptive multi-stage classifier, comprised of a likelihood evaluation, an illumination-robust measure, and a temporal correlation check. A probabilistic foreground mask generation approach integrates the classification decisions by exploiting the overlap between patches, ensuring smooth contours of the foreground objects while effectively minimising the number of errors. The parameter settings are robust across a wide variety of sequences, and no post-processing of the foreground masks is required. Experiments on the difficult Wallflower and I2R datasets show that the proposed method obtains considerably better results (both qualitatively and quantitatively) than methods based on Gaussian mixture models, feature histograms, and normalised vector distances. Further experiments on the CAVIAR dataset (using several tracking algorithms) indicate that the proposed method leads to considerable improvements in object tracking accuracy.
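The overlapping-patch voting idea in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the descriptor (patch mean plus mean gradients), the patch size, step, threshold, and the static per-patch background model `bg_model` are all assumptions made for the sketch; in practice the model would be updated online.

```python
import numpy as np

def patch_descriptor(patch):
    """Low-dimensional texture descriptor for one patch: mean intensity plus
    mean horizontal and vertical gradients (a stand-in for the paper's
    descriptor, which the abstract does not specify)."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), gx.mean(), gy.mean()])

def classify_patches(frame, bg_model, patch=8, step=4, thresh=10.0):
    """Label each overlapping patch as foreground when its descriptor deviates
    from the stored background descriptor; votes are accumulated per pixel so
    that the overlap between patches yields a smooth probabilistic mask."""
    votes = np.zeros(frame.shape, float)
    counts = np.zeros(frame.shape, float)
    for y in range(0, frame.shape[0] - patch + 1, step):
        for x in range(0, frame.shape[1] - patch + 1, step):
            d = patch_descriptor(frame[y:y + patch, x:x + patch])
            fg = np.linalg.norm(d - bg_model[y // step, x // step]) > thresh
            votes[y:y + patch, x:x + patch] += fg
            counts[y:y + patch, x:x + patch] += 1
    return votes / np.maximum(counts, 1)   # per-pixel foreground probability
```

Because every pixel receives several independent patch decisions, a single misclassified patch cannot by itself flip a pixel, which is what keeps the mask contours smooth without post-processing.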
Robust Real Time Moving People Detection in Surveillance Scenarios
Álvaro García-Martín, J. Sanchez. DOI: 10.1109/AVSS.2010.33

Abstract: In this paper an improved real-time algorithm for detecting pedestrians in surveillance video is proposed. The algorithm is based on people appearance and defines a person model as the union of four body-part models. First, motion segmentation is performed to detect moving pixels. Then, moving regions are extracted and tracked. Finally, the detected moving objects are classified as human or non-human. In order to test and validate the algorithm, we have developed a dataset containing annotated surveillance sequences of different complexity levels, focused on pedestrian detection. Experimental results over this dataset show that our approach performs considerably well in real time, and even better than other real-time and non-real-time approaches from the state of the art.
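The first two pipeline stages described above (motion segmentation, then moving-region extraction) can be illustrated with a minimal sketch; the fixed differencing threshold and the single static reference background are simplifying assumptions, not the paper's method:

```python
import numpy as np

def moving_pixels(frame, background, thresh=25):
    """Stage 1, motion segmentation: mark pixels whose absolute difference
    from a reference background exceeds a threshold. Casting to int avoids
    uint8 wrap-around in the subtraction."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def bounding_box(mask):
    """Stage 2, region extraction: the tight bounding box of a binary motion
    mask, i.e. the moving region handed on to tracking and classification."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```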
An Ultra-Low-Power Contrast-Based Integrated Camera Node and its Application as a People Counter
L. Gasparini, R. Manduchi, M. Gottardi. DOI: 10.1109/AVSS.2010.26

Abstract: We describe the implementation, in a self-standing system, of a novel contrast-based binary CMOS imaging sensor. This sensor is characterized by very low power consumption and wide dynamic range, which makes it attractive for wireless camera network applications. In our implementation, the sensor is interfaced with a Flash-based FPGA processor, which handles data readout and image processing. This self-standing camera node is configured as a system for counting persons walking through a corridor. Simple features are extracted from each image in a video stream at 30 fps. A classifier is designed based on the temporal evolution of these features, which is modeled as a Markov chain. The video stream is then segmented into intervals corresponding to individual persons crossing the field of view. Experimental results are shown in cross-validated tests over real sequences acquired by the camera.
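Segmenting a feature stream with a Markov-chain model, as the abstract describes, can be sketched with a standard log-domain Viterbi decoder. The two hypothetical states (corridor empty / person present) and the discrete observations are assumptions for the sketch; the paper's actual state space and features are not given above.

```python
import numpy as np

def viterbi(obs, trans, emit, prior):
    """Most likely state sequence of a discrete HMM (log-domain Viterbi).
    obs: sequence of observation indices; trans[i, j]: P(j | i);
    emit[s, o]: P(o | s); prior[s]: initial state probability."""
    T, S = len(obs), trans.shape[0]
    logd = np.log(prior) + np.log(emit[:, obs[0]])   # best log-prob per state
    back = np.zeros((T, S), int)                     # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(trans)       # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                    # trace back the winner
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Runs of the "person present" state in the decoded path then correspond to the intervals the counter attributes to individual crossings.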
On the Evaluation of Background Subtraction Algorithms without Ground-Truth
Juan C. Sanmiguel, J. Sanchez. DOI: 10.1109/AVSS.2010.21

Abstract: In video-surveillance systems, the moving object segmentation stage (commonly based on background subtraction) has to deal with several issues such as noise, shadows and multimodal backgrounds. Hence, its failure is inevitable, and its automatic evaluation is a desirable requirement for online analysis. In this paper, we propose a hierarchy of existing performance measures for video object segmentation that do not rely on ground-truth. Four measures based on color and motion are then selected and examined in detail with different segmentation algorithms and standard test sequences for video object segmentation. Experimental results show that color-based measures perform better than motion-based measures, and that background multimodality heavily reduces the accuracy of all the obtained evaluation results.
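A ground-truth-free, colour-based measure of the kind the paper ranks can be sketched as follows. The specific measure here, the distance between the mean colours inside and outside the mask, is a stand-in assumption; the abstract does not define the four measures actually evaluated.

```python
import numpy as np

def color_contrast_score(frame, mask):
    """Ground-truth-free segmentation quality cue: a good object mask tends
    to separate regions of dissimilar colour, so score the Euclidean distance
    between the mean colour inside and outside the mask. Higher is better;
    no annotated reference mask is needed."""
    inside = frame[mask].mean(axis=0)     # mean colour of claimed foreground
    outside = frame[~mask].mean(axis=0)   # mean colour of claimed background
    return float(np.linalg.norm(inside - outside))
```

A measure like this can rank candidate masks online, which is exactly the setting where multimodal backgrounds hurt: when the background itself mixes colours, the inside/outside contrast shrinks even for a correct mask.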
Simultaneous Object Recognition and Localization in Image Collections
Shao-Chuan Wang, Y. Wang. DOI: 10.1109/AVSS.2010.47

Abstract: This paper presents a weakly supervised method that simultaneously addresses the object localization and recognition problems. Unlike prior work using exhaustive search methods such as sliding windows, we propose to learn category- and image-specific visual words in image collections by extracting discriminating feature information via two different types of support vector machines: the standard L2-regularized L1-loss SVM, and the one with L1 regularization and L2 loss. The selected visual words are used to construct visual attention maps, which provide descriptive information for each object category. To preserve local spatial information, we further refine these maps by Gaussian smoothing and cross bilateral filtering, so that both appearance and spatial information can be utilized in visual categorization applications. Our method is not limited to any specific type of image descriptor, or to any particular codebook learning and feature encoding technique. In this paper, we conduct preliminary experiments on a subset of the Caltech-256 dataset using bag-of-feature (BOF) models with SIFT descriptors. We show that the use of our visual attention maps improves recognition performance, while the maps selected by L1-regularized L2-loss SVMs exhibit the best recognition and localization results.
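The L1-regularized L2-loss (squared-hinge) SVM selection step can be sketched with scikit-learn's `LinearSVC`; the toy bag-of-features input and the value of `C` are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_visual_words(X, y, C=0.1):
    """Select discriminative visual words with an L1-regularized squared-hinge
    SVM: the L1 penalty drives most weights to exactly zero, so the surviving
    nonzero weights pick out the words that separate the category."""
    svm = LinearSVC(C=C, penalty="l1", loss="squared_hinge", dual=False)
    svm.fit(X, y)
    return np.flatnonzero(svm.coef_[0])   # indices of the selected words
```

This sparsity is why the L1 variant yields usable attention maps: only the selected words contribute, so back-projecting their image locations gives a map concentrated on the object rather than spread over the whole codebook.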
PETS2010: Dataset and Challenge
J. Ferryman, A. Ellis. DOI: 10.1109/AVSS.2010.90

Abstract: This paper describes the crowd image analysis challenge that forms part of the PETS 2010 workshop. The aim of this challenge is to use new or existing systems for i) crowd count and density estimation, ii) tracking of individual(s) within a crowd, and iii) detection of separate flows and specific crowd events, in a real-world environment. The dataset scenarios were filmed from multiple cameras and involve multiple actors.
Person Re-identification Using Haar-based and DCD-based Signature
Sławomir Bąk, E. Corvée, F. Brémond, M. Thonnat. DOI: 10.1109/AVSS.2010.68

Abstract: In many surveillance systems there is a requirement to determine whether a given person of interest has already been observed over a network of cameras. This paper presents two approaches to this person re-identification problem. In general, the human appearance obtained in one camera differs from that obtained in another camera, so in order to re-identify people the human signature should handle differences in illumination, pose and camera parameters. Our appearance models are based on Haar-like features and dominant color descriptors. The AdaBoost scheme is applied to both descriptors to achieve the most invariant and discriminative signature. The methods are evaluated using benchmark video sequences with different camera views, where people are automatically detected using Histograms of Oriented Gradients (HOG). Re-identification performance is presented using the cumulative matching characteristic (CMC) curve.
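The CMC curve used for evaluation above is a standard re-identification metric and can be computed as follows (the distance matrix between probe and gallery signatures is assumed given; how the signatures are built is the paper's contribution, not sketched here):

```python
import numpy as np

def cmc_curve(dist, probe_ids, gallery_ids):
    """Cumulative Matching Characteristic: CMC(k) is the fraction of probes
    whose correct gallery identity appears among the k nearest (smallest
    distance) gallery entries."""
    n_probe, n_gallery = dist.shape
    hits = np.zeros(n_gallery)
    for i in range(n_probe):
        order = np.argsort(dist[i])     # gallery indices by ascending distance
        rank = np.where(gallery_ids[order] == probe_ids[i])[0][0]
        hits[rank:] += 1                # a rank-r hit counts for every k > r
    return hits / n_probe               # CMC(k) for k = 1..n_gallery
```

By construction the curve is non-decreasing and reaches 1.0 at the gallery size; rank-1 accuracy is its first value.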
Dynamics Based Trajectory Segmentation for UAV videos
P. Banerjee, R. Nevatia. DOI: 10.1109/AVSS.2010.23

Abstract: A novel representation of vehicle trajectories is proposed for applications in trajectory analysis and activity detection. Specifically, a piecewise arc-fitting based smoothing algorithm is proposed for denoising the trajectories. A dynamic program is used to find the optimal arc fit to a given trajectory. We motivate the usage of dynamic primitives to parametrize common vehicular activities, and propose a dynamics-based trajectory segmentation algorithm. Each primitive is modeled using a second-order autoregressive model, and the primitives form useful descriptors for a given vehicular trajectory. We evaluate both our trajectory smoothing and dynamic trajectory segmentation algorithms on a real UAV video dataset, and show performance improvements which clearly motivate their wide applicability in a general trajectory analysis system.
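Fitting the second-order autoregressive model to one trajectory coordinate can be sketched with ordinary least squares; least squares is a common estimator for AR parameters, though the abstract does not state how the paper estimates them:

```python
import numpy as np

def fit_ar2(x):
    """Least-squares fit of a second-order autoregressive model
    x[t] = a1 * x[t-1] + a2 * x[t-2]; returns (a1, a2) and the residual RMS,
    which measures how well one dynamic primitive explains the segment."""
    X = np.column_stack([x[1:-1], x[:-2]])   # lagged regressors x[t-1], x[t-2]
    y = x[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rms = np.sqrt(np.mean((y - X @ coef) ** 2))
    return coef, rms
```

A segmentation in this spirit would slide window boundaries so that each segment has a low residual RMS under its own (a1, a2), with the fitted pairs serving as the segment descriptors.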
License Plate Detection Using Local Structure Patterns
Younghyun Lee, Taeyup Song, Bonhwa Ku, Seoungseon Jeon, D. Han, Hanseok Ko. DOI: 10.1109/AVSS.2010.48

Abstract: We address the problem of license plate detection in video surveillance systems. The AdaBoost-based approach, known for its relative ease of implementation, makes use of discriminative features such as edges or Haar-like features. In this paper, we propose a novel detection algorithm based on local structure patterns for license plate detection. The proposed algorithm includes post-processing methods to reduce the false positive rate using positional and color information of license plates. Experimental results demonstrate the effectiveness of the proposed method compared with existing approaches.
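The abstract does not define its local structure patterns; the classic local binary pattern (LBP), a representative member of this feature family, can serve as a sketch of how such a pattern encodes a neighbourhood:

```python
def lbp_code(patch):
    """8-bit local binary pattern of a 3x3 neighbourhood: each neighbour,
    visited clockwise from the top-left, sets one bit when it is at least as
    bright as the centre pixel. The code captures local structure (edges,
    corners, flat areas) while being invariant to monotonic lighting changes,
    which suits the high-contrast strokes of plate characters."""
    c = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, n in enumerate(ring) if n >= c)
```

A detector in this style histograms the codes over candidate windows and feeds the histograms to a classifier.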
A Framework Dealing with Uncertainty for Complex Event Recognition
R. Romdhane, F. Brémond, M. Thonnat. DOI: 10.1109/AVSS.2010.39

Abstract: This paper presents a constraint-based approach to video event recognition with probabilistic reasoning for handling uncertainty. The main advantage of constraint-based approaches is the possibility for a human expert to model composite events with complex temporal constraints; however, such approaches are usually deterministic and do not provide a convenient probabilistic reasoning mechanism for handling uncertainty. The first advantage of the proposed approach is the ability to model and recognize composite events with complex temporal constraints. The second advantage is that probability theory provides a consistent framework for dealing with uncertain knowledge, enabling robust and reliable recognition of complex events. The approach is evaluated on four real healthcare videos and a public video from ETISEO'06, and the results are compared with a state-of-the-art method. The comparison shows that the proposed approach significantly improves the recognition process and characterizes the likelihood of the recognized events.
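The combination of probabilistic scoring with expert-written temporal constraints can be illustrated by a toy composite-event scorer. The event format, the "a ends before b starts" constraint type, and the product combination rule are all assumptions made for this sketch, not the paper's model:

```python
def composite_event_likelihood(sub_events, constraints):
    """Score a composite event: zero when any temporal constraint is violated
    (the deterministic, constraint-based part), otherwise the product of the
    sub-event likelihoods (the probabilistic part). sub_events maps a name to
    {"start": t0, "end": t1, "p": likelihood}; constraints is a list of
    (a, b) pairs meaning 'a must end before b starts'."""
    for a, b in constraints:
        if sub_events[a]["end"] > sub_events[b]["start"]:
            return 0.0                      # hard temporal constraint violated
    p = 1.0
    for ev in sub_events.values():
        p *= ev["p"]                        # combine sub-event uncertainties
    return p
```

The returned value plays the role of the likelihood the paper attaches to each recognized event, instead of the plain yes/no answer of a purely deterministic recognizer.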