{"title":"Joint spatial-temporal alignment of networked cameras","authors":"Chia-Yeh Lee, Tsuhan Chen, Ming-Yu Shih, Shiaw-Shian Yu","doi":"10.1109/ICDSC.2009.5289361","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289361","url":null,"abstract":"In this paper, we propose a method for aligning networked cameras spatially and temporally. Synchronizing video sequences and recovering spatial information among cameras are crucial steps for applications such as robust tracking and video mosaic. Without prior knowledge of internal and external parameters of cameras, we attempt to automatically estimate their spatial relationship and time offset which is possibly caused by network transmission delay. Our main focus is on cameras with overlapping field of views. Exploiting the fact that spatial and temporal information are related, we use one to boost the other. Initially assuming no time delay, the homography between cameras can be estimated by motion detection. Based on the homography, time difference can thus be recovered by analyzing activities in overlapping regions. We iteratively use spatial and temporal information to boost each other till reaching converging criteria. The algorithm can be extend to finding spatial and temporal relationship in multiple cameras. The experiment is performed in an outdoor parking lot and it is showed that our algorithm can successfully align cameras both in space and time.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114859600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhD forum: Competing agents for distributed object-tracking in smart camera networks","authors":"Uwe Jänen, J. Hähner, C. Müller-Schloer","doi":"10.1109/ICDSC.2009.5289395","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289395","url":null,"abstract":"This paper describes an approach for object-tracking in large smart camera networks, which is based on software-agents with conflicting goals. The focus is on the system architecture. These software-agents are specialized on concrete functions, e.g. maximizing sensor coverage of a surveillance area by aligning the cameras' fields of view. To reach collaborative behavior, an approach inspired by Multi-Criteria Decision Making (MCDM) of the operations research is used.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125381652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surprisal-aware scheduling of PTZ cameras","authors":"Henry Detmold, A. Hengel, A. Dick, Christopher S. Madden, Alex Cichowski, R. Hill","doi":"10.1109/ICDSC.2009.5289368","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289368","url":null,"abstract":"An approach is presented for scheduling PTZ cameras on guard tours with two or more fields of view. In contrast to the target tracking of previous work, this approach seeks to optimise the coverage of the area under surveillance. Specifically, the aim is to minimise the surprisal (self-information) of events in unobserved fields of view. An entropy driven scheduler based on Kullback-Leibler divergence (information gain) is presented, and compared with three naive schedulers (random, round robin and constant selection of one field of view). Experiments investigate its performance on networks of ten cameras. These are evaluated over factors including four different scheduling approaches, different numbers of fields of view, and different inactive times whilst switching views. They demonstrate the efficacy of the entropy driven scheduler as it outperforms the naive schedulers by a significant margin by favouring certain fields of view that are more likely to reveal events with high surprisal value. The scheduler is target agnostic, as it operates on low level properties of the video signal, specifically, occupancy as determined by background subtraction. This permits an efficient implementation that is independent of the number of targets in the area under surveillance. As each camera is scheduled independently, the approach is scalable via distributed implementation, including on smart cameras.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"609 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116075491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple-view object recognition in band-limited distributed camera networks","authors":"A. Yang, Subhransu Maji, C. M. Christoudias, Trevor Darrell, Jitendra Malik, S. Sastry","doi":"10.1109/ICDSC.2009.5289410","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289410","url":null,"abstract":"In this paper, we study the classical problem of object recognition in low-power, low-bandwidth distributed camera networks. The ability to perform robust object recognition is crucial for applications such as visual surveillance to track and identify objects of interest, and compensate visual nuisances such as occlusion and pose variation between multiple camera views. We propose an effective framework to perform distributed object recognition using a network of smart cameras and a computer as the base station. Due to the limited bandwidth between the cameras and the computer, the method utilizes the available computational power on the smart sensors to locally extract and compress SIFT-type image features to represent individual camera views. In particular, we show that between a network of cameras, high-dimensional SIFT histograms share a joint sparse pattern corresponding to a set of common features in 3-D. Such joint sparse patterns can be explicitly exploited to accurately encode the distributed signal via random projection, which is unsupervised and independent to the sensor modality. On the base station, we study multiple decoding schemes to simultaneously recover the multiple-view object features based on the distributed compressive sensing theory. The system has been implemented on the Berkeley CITRIC smart camera platform. The efficacy of the algorithm is validated through extensive simulation and experiments.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"591 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122936183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Localization of distributed wireless cameras","authors":"N. Anjum, A. Cavallaro","doi":"10.1109/ICDSC.2009.5289396","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289396","url":null,"abstract":"Cooperative cameras enable monitoring wide areas and detecting actions and events on a large scale. Due to hardware advancements and economic factors, distributed networks are becoming widely used for a variety of applications ranging from traffic monitoring and surveillance in shopping malls to sports coverage. However, the localization of a large number of cameras in a wide area is not a trivial task. Manual methods are time consuming and can be inaccurate over time. For this reason, we propose an algorithm that uses measurements from the observed objects to perform pair-wise automatic localization of a distributed set of cameras with non-overlapping fields of view. We use the temporal information derived from trajectory information to estimate unobserved trajectory segments, which are then used to estimate the position of the cameras on a common ground plane. Furthermore, the exit-entrance direction of the moving objects is used to estimate the relative orientation of adjacent cameras. We demonstrate the algorithm on a distributed network of simulated cameras with wireless communication and compare it with centralized state-of-the-art approaches.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121882329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous real-time surveillance system with distributed IP cameras","authors":"Kofi Appiah, A. Hunter, Jonathan Owens, Philip Aiken, Katrina Lewis","doi":"10.1109/ICDSC.2009.5289387","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289387","url":null,"abstract":"An autonomous Internet Protocol (IP) camera based object tracking and behaviour identification system, capable of running in real-time on an embedded system with limited memory and processing power is presented in this paper. The main contribution of this work is the integration of processor intensive image processing algorithms on an embedded platform capable of running at real-time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected to extract their trajectories which are then fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter/leave regions or cross tripwires superimposed on live video by the operator.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128078013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metric learning for semi-supervised clustering of Region Covariance Descriptors","authors":"Ravishankar Sivalingam, V. Morellas, Daniel Boley, N. Papanikolopoulos","doi":"10.1109/ICDSC.2009.5289415","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289415","url":null,"abstract":"In this paper we extend distance metric learning to a new class of descriptors known as Region Covariance Descriptors. Region covariances are becoming increasingly popular as features for object detection and classification over the past few years. Given a set of pairwise constraints by the user, we want to perform semi-supervised clustering of these descriptors aided by metric learning approaches. The covariance descriptors belong to the special class of symmetric positive definite (SPD) tensors, and current algorithms cannot deal with them directly without violating their positive definiteness. In our framework, the distance metric on the manifold of SPD matrices is represented as an L2 distance in a vector space, and a Mahalanobis-type distance metric is learnt in the new space, in order to improve the performance of semi-supervised clustering of region covariances. We present results from clustering of covariance descriptors representing different human images, from single and multiple camera views. This transformation from a set of positive definite tensors to a Euclidean space paves the way for the application of many other vector-space methods to this class of descriptors.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133754885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementation of Canny edge detection on the WiCa SmartCam architecture","authors":"B. Geelen, Francis Deboeverie, P. Veelaert","doi":"10.1109/ICDSC.2009.5289349","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289349","url":null,"abstract":"There is a rapidly growing demand for cameras containing built-in intelligence for various purposes such as surveillance and identification. Face recognition is an important application for these cameras. Previous research has shown that faces can be well represented by parabola segments using an algorithm which fits parabola segments to edge pixels. Faces are then recognized using a technique which matches parabola segments based on distance and intensity. Considerable computational resources are required for the extraction of the parabola primitives, due to the need for Canny edge detection. This algorithm is well-suited for the new generation of massively parallel SmartCams though, if it is represented in a highly parallelized, low complexity representation adapted to the characteristics of these SmartCam architectures. This paper proposes such an implementation of the Canny edge detection algorithm for the Single Instruction Multiple Data (SIMD) Xetal IC3D processor, resulting in a real-time performance using a smart camera not bigger than a typical surveillance camera.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117316081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhD forum: Flexible clustering in smart camera networks","authors":"Bernhard Dieber, B. Rinner","doi":"10.1109/ICDSC.2009.5289392","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289392","url":null,"abstract":"Clustering is an important concept for structuring large networks of cameras. In this research we investigate the various trade offs of clustering in networks of smart cameras. Major research questions in this context are (i) how to model clustering, (ii) how to deal with the heterogeneity in large camera networks, and (iii) how to integrate clustering in real-world networks. We have developed a flexible and scalable software suite that supports clustering in camera networks. We present first results in a multi-camera person tracking case study.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"815 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115132017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient background estimation algorithm for embedded smart cameras","authors":"Vikas Reddy, Conrad Sanderson, B. Lovell, A. Bigdeli","doi":"10.1109/ICDSC.2009.5289348","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289348","url":null,"abstract":"Segmentation of foreground objects of interest from an image sequence is an important task in most smart cameras. Background subtraction is a popular and efficient technique used for segmentation. The method assumes that a background model of the scene under analysis is known. However, in many practical circumstances it is unavailable and needs to be estimated from cluttered image sequences. With embedded systems as the target platform, in this paper we propose a sequential technique for background estimation in such conditions, with low computational and memory requirements. The first stage is somewhat similar to that of the recently proposed agglomerative clustering background estimation method, where image sequences are analysed on a block by block basis. For each block location a representative set is maintained which contains distinct blocks obtained along its temporal line. The novelties lie in iteratively filling in background areas by selecting the most appropriate candidate blocks according to the combined frequency responses of extended versions of the candidate block and its neighbourhood. It is assumed that the most appropriate block results in the smoothest response, indirectly enforcing the spatial continuity of structures within a scene. Experiments on real-life surveillance videos demonstrate the advantages of the proposed method.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115316383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}