{"title":"A multi-criteria model for robust foreground extraction","authors":"A. H. Kamkar-Parsi, R. Laganière, M. Bouchard","doi":"10.1145/1099396.1099410","DOIUrl":"https://doi.org/10.1145/1099396.1099410","url":null,"abstract":"Numerous methods are currently available for motion detection using background modeling and subtraction. However, many challenges remain, such as moving shadows, illumination changes, moving background, relocation of background objects, and initialization with moving objects. This paper presents a new background subtraction algorithm that aggregates the classification results of several foreground extraction techniques, based on UV color deviations, probabilistic gradient information, and vector deviations, to produce a single decision that is more robust to these challenges.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134117914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A video analysis framework for soft biometry security surveillance","authors":"Yuan-fang Wang, E. Chang, K. Cheng","doi":"10.1145/1099396.1099412","DOIUrl":"https://doi.org/10.1145/1099396.1099412","url":null,"abstract":"We propose a distributed, multi-camera video analysis paradigm for airport security surveillance. We propose to use a new class of biometric signatures, called soft biometry, including a person's height, build, skin tone, color of shirts and trousers, motion pattern, trajectory history, etc., to identify and track errant passengers and suspicious events without having to shut down a whole terminal building and cancel multiple flights. The proposed research aims to enable the reliable acquisition, maintenance, and correspondence of soft biometry signatures in a coordinated manner from a large number of video streams for security surveillance. The intellectual merit of the proposed research is to address three important video analysis problems in a distributed, multi-camera surveillance network: sensor network calibration, peer-to-peer sensor data fusion, and stationary-dynamic cooperative camera sensing.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126179891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exemplar-based background model initialization","authors":"Andrea Colombari, M. Cristani, Vittorio Murino, Andrea Fusiello","doi":"10.1145/1099396.1099402","DOIUrl":"https://doi.org/10.1145/1099396.1099402","url":null,"abstract":"Most automated video-surveillance applications are based on background (BG) subtraction techniques, which aim at distinguishing moving objects in a static scene. These strategies strongly depend on the BG model, which has to be initialized and updated. A good initialization is crucial for subsequent processing. In this paper, we propose a novel method for BG initialization and recovery that merges ideas from the video inpainting and generative modelling subfields. The method takes as input a video sequence in which several objects move in front of a stationary BG. A statistical representation of the BG is then iteratively built, automatically discarding the moving objects. The method is based on the following hypotheses: (i) a portion of the BG, called the sure BG, can be identified with high certainty using only per-pixel reasoning, and (ii) the remaining scene BG can be generated using exemplars of the sure BG. The proposed algorithm exploits these hypotheses in a principled and effective way.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128119426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coopetitive visual surveillance using model predictive control","authors":"V. Singh, P. Atrey","doi":"10.1145/1099396.1099422","DOIUrl":"https://doi.org/10.1145/1099396.1099422","url":null,"abstract":"Active cooperative sensing with multiple sensors is being actively researched in visual surveillance. However, active cooperative sensing often suffers from delays in information exchange among the sensors and from sensor reaction delays. This is because simplistic control strategies like Proportional Integral Derivative (PID), which do not employ a look-ahead strategy, often fail to counterbalance these delays in real time. Hence, there is a need for more sophisticated interaction and control mechanisms that can overcome the delay problems. In this paper, we propose a coopetitive framework using Model Predictive Control (MPC) which allows the sensors not only to 'compete' and 'cooperate' with each other to perform the designated task in the best possible manner, but also to dynamically swap their roles and sub-goals rather than just their parameters. MPC is used as a feedback control mechanism to allow sensors to react not only to past observations but also to possible future events. We demonstrate the utility of our framework in a dual-camera surveillance setup with the goal of capturing high-resolution images of intruders in a surveyed rectangular area, e.g., an ATM lobby or a museum. The results are promising and clearly establish the efficacy of coopetition as an effective form of interaction between sensors and of MPC as a feedback mechanism superior to PID.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117021878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An integrated multi-modal sensor network for video surveillance","authors":"A. Prati, R. Vezzani, L. Benini, Elisabetta Farella, P. Zappi","doi":"10.1145/1099396.1099415","DOIUrl":"https://doi.org/10.1145/1099396.1099415","url":null,"abstract":"Multi-modal sensor integration can be a successful strategy for enhancing video surveillance systems. In this work, a computer vision system able to detect and track people from multiple cameras is integrated with a wireless sensor network equipped with PIR (Passive InfraRed) sensors. The two subsystems are briefly described and cases in which computer vision algorithms are likely to fail are discussed. Then, simple but reliable outputs from the PIR sensor nodes are exploited to improve the accuracy of the vision system. In particular, two case studies are reported: the first uses the presence detection of PIR sensors to disambiguate between an opened door and a moving person, while the second handles motion direction changes during occlusions. Preliminary results are reported and demonstrate the usefulness of integrating the two subsystems.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129155667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliable real-time foreground detection for video surveillance applications","authors":"Jordi Lluís, Xavier Miralles, Oscar Bastidas","doi":"10.1145/1099396.1099408","DOIUrl":"https://doi.org/10.1145/1099396.1099408","url":null,"abstract":"Foreground segmentation is usually needed as an initial step in video surveillance applications. Background subtraction is typically used to segment moving regions by comparing each new frame to a model of the scene background. We present a segmentation algorithm that works in real time and efficiently extracts foreground objects from indoor and outdoor scenes that may contain small environmental motions. The model adapts quickly to changes in the video, which enables very sensitive detection of moving targets. The evaluation performed shows that this approach reliably extracts the foreground with very few false alarms and missed detections.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130845829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquisition of high-resolution images through on-line saccade sequence planning","authors":"Andrew D. Bagdanov, A. del Bimbo, F. Pernici","doi":"10.1145/1099396.1099419","DOIUrl":"https://doi.org/10.1145/1099396.1099419","url":null,"abstract":"This paper considers the problem of scheduling an active observer to visit as many targets in an area of surveillance as possible. We show how it is possible to plan a sequence of decisions regarding which target to look at through such a foveal-sensing action. We propose a framework in which a pan/tilt/zoom camera executes saccades in order to visit, and acquire at least one high-resolution image of, as many moving targets as possible before they leave the scene. An intelligent choice of the order of sensing the targets can significantly reduce the total dead time wasted by the active camera and, consequently, its cycle time. We cast the whole problem into a dynamic discrete optimization framework. In particular, we show that the problem can be solved by modeling the attentional gaze control as a kinetic traveling salesperson problem, whose solution is approximated by iteratively solving time-dependent orienteering problems. Congestion analysis experiments are reported demonstrating the effectiveness of the solution in acquiring high-resolution images of a large number of moving targets in a wide area. The evaluation was conducted with a simulation of a dual-camera system in a master-slave configuration. We also report on preliminary experiments conducted using live cameras in a real surveillance environment.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124327084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constructing task visibility intervals for a surveillance system","authors":"Ser-Nam Lim, Anurag Mittal, L. Davis","doi":"10.1145/1099396.1099421","DOIUrl":"https://doi.org/10.1145/1099396.1099421","url":null,"abstract":"One of the goals of a multi-camera surveillance system is to collect useful video clips of objects in the scene. Objects in the collected videos should be unobstructed, in the field of view of the given camera, and meet task-specific resolution requirements. For this purpose, we describe an algorithm that constructs \"task visibility intervals\", which are tuples of information about what to sense (task-object pairs), when to sense (feasible future temporal intervals in which to start a task), and how to sense (the camera to use and the corresponding viewing angles and focal length). The algorithm first looks for temporal intervals within which the angular extents of objects overlap each other, causing the object farthest from the given camera to be occluded. Outside these intervals, sub-intervals are then constructed such that feasible camera settings exist for capturing the object. Experimental results are provided to illustrate the system's capabilities in constructing such task visibility intervals, followed by scheduling them using a greedy algorithm.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128992947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive parametric statistical background subtraction for video segmentation","authors":"P. Amnuaykanchanasin, T. Thongkamwitoon, N. Srisawaiwilai, S. Aramvith, T. Chalidabhongse","doi":"10.1145/1099396.1099409","DOIUrl":"https://doi.org/10.1145/1099396.1099409","url":null,"abstract":"Background subtraction has proven to be a very effective technique for automated video surveillance applications. In statistical approaches, the background model is usually estimated using a Gaussian model and is adaptively updated to deal with changes in dynamic scene environments. However, most algorithms update the background parameters linearly; as a result, the classification results are erroneous during the background convergence process. In this paper, we present a novel learning-factor control for an adaptive background subtraction algorithm. The method adaptively adjusts the rate of adaptation of the background model according to events in the video sequence. Experimental results show that the algorithm improves classification accuracy compared to other known methods.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122778739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time surveillance video display with salience","authors":"Guangyu Wang, T. Wong, P. Heng","doi":"10.1145/1099396.1099403","DOIUrl":"https://doi.org/10.1145/1099396.1099403","url":null,"abstract":"In this paper, we aim to provide a means for the efficient display of surveillance video. In video surveillance, there are usually certain regions of interest (ROIs), such as entrances, exits, and moving objects or persons, which deserve more attention. By tracking and locally zooming an ROI, the proposed method adds salience to it to help the surveillance operator locate such important regions effectively. Here, salience means locally highlighting a particular region. Given an input video signal, the ROI is detected first. The original video frame is mapped as a texture onto a deformed mesh to produce the zoom effect. The position and shape of the ROI determine the mesh deformation. Experiments show that the proposed method is effective and efficient.","PeriodicalId":196499,"journal":{"name":"Proceedings of the third ACM international workshop on Video surveillance & sensor networks","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132161706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}