{"title":"Optimizing load distribution in camera networks with a hypergraph model of coverage topology","authors":"A. Mavrinac, Xiang Chen","doi":"10.1109/ICDSC.2011.6042903","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042903","url":null,"abstract":"A new topological model of camera network coverage, based on a weighted hypergraph representation, is introduced. The model's theoretical basis is the coverage strength model, presented in previous work and summarized here. Optimal distribution of task processing is approximated by adapting a local search heuristic for parallel machine scheduling to this hypergraph model. Simulation results are presented to demonstrate its effectiveness.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132836889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhD forum: Using participatory camera networks for object tracking","authors":"Manoj R. Rege, V. Handziski, A. Wolisz","doi":"10.1109/ICDSC.2011.6042947","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042947","url":null,"abstract":"Mobile devices come embedded with various sensors of which camera is most widely used, however operated by the device owner for the individual needs. We feel that there is a potential for these camera sensors to be used in an on demand participatory fashion for a variety of community based sensing applications. In this paper, we propose forming local participatory camera networks for on-demand object tracking. We explain the various challenges such as mobile device discovery and configuration, willingness to participate, and direct communication cooperation between mobile devices. Finally, we define metrics for performance evaluation of these camera participatory networks for object tracking and discuss an application scenario for the same.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134396852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: Real-time remote reporting of active regions with Wi-FLIP","authors":"J. Fernández-Berni, R. Carmona-Galán, G. Cembrano, Á. Zarándy, Á. Rodríguez-Vázquez","doi":"10.1109/ICDSC.2011.6042948","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042948","url":null,"abstract":"This paper describes a real-time application programmed into Wi-FLIP, a wireless smart camera resulting from the integration of FLIP-Q, a focal-plane low-power image processor, and Imote2, a commercial WSN platform. The application, though simple, shows the potentiality of the reduced scene representations achievable at FLIP-Q to speed up the processing. It consists of detecting the active regions within the scene being surveyed, that is, those regions undergoing thresholded variations with respect to the background. If an activity pattern is prescribed, FLIP-Q enables the reconfigurability of the image plane accordingly, making its detection and tracking easier. For each frame, the number of active regions is calculated and wirelessly reported in real time. A base station picks up the radio signal and sends the information to a PC via USB, also in real time. Frame rates up to around 10fps have been achieved, although it greatly depends on the light conditions and the image plane division grid.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134598973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA-based pedestrian detection using array of covariance features","authors":"Samuele Martelli, Diego Tosato, M. Cristani, Vittorio Murino","doi":"10.1109/ICDSC.2011.6042923","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042923","url":null,"abstract":"In this paper we propose a pedestrian detection algorithm and its implementation on a Xilinx Virtex-4 FPGA. The algorithm is a sliding window-based classifier, that exploits a recently designed descriptor, the covariance of features, for characterizing pedestrians in a robust way. In the paper we show how such descriptor, originally suited for maximizing accuracy performances without caring about timings, can be quickly computed in an elegant, parallel way on the FPGA board. A grid of overlapped covariances extracts information from the sliding window, and feeds a linear Support Vector Machine that performs the detection. Experiments are performed on the INRIA pedestrian benchmark; the performances of the FPGA-based detector are discussed in terms of required computational effort and accuracy, showing state-of-the-art detection performances under excellent timings and economic memory usage.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122175303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning proactive control strategies for PTZ cameras","authors":"Wiktor Starzyk, F. Qureshi","doi":"10.1109/ICDSC.2011.6042928","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042928","url":null,"abstract":"This paper introduces a camera network capable of automatically learning proactive control strategies that enable a set of active pan/tilt/zoom (PTZ) cameras, supported by wide-FOV passive cameras, to provide persistent coverage of the scene. When a situation is encountered for the first time, a reasoning module performs PTZ camera assignments and handoffs. The results of this reasoning exercise are 1) generalized so as to be applicable to many other similar situations and 2) stored in a production system for later use. When a “similar” situation is encountered in the future, the production-system reacts instinctively and performs camera assignments and handoffs, bypassing the reasoning module. Over time the proposed camera network reduces its reliance on the reasoning module to perform camera assignments and handoffs, consequently becoming more responsive and computationally efficient.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114064612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rapid discriminative detection for smart camera applications","authors":"Geoffrey Taylor, Ping Wang, Z. Rasheed, N. Haering","doi":"10.1109/ICDSC.2011.6042912","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042912","url":null,"abstract":"Tracking-by-detection is an attractive paradigm for intelligent visual surveillance applications where clutter, lighting variations, target overlap and occlusions hamper conventional background modeling. However, state-of-the-art vehicle and pedestrian detectors based on discriminative classification are too computationally expensive for real-time implementation on embedded smart cameras. This paper presents the Generative Focus of Attention-Discriminative Validation (GFA-DV) detector which uses generative target detection to greatly improve the efficiency of discriminative classification. The proposed method gains further efficiency by using a hierarchical visual codebook to enable each stage of the detector to efficiently utilize the same features within a different quantization of the feature space. This approach reduces the expense of feature matching compared to multiple flat codebooks. The proposed GFA-DV detector is experimentally compared to several state-of-the-art methods, and shown to perform better than other efficient detectors while achieving a 100 times speedup over more accurate detectors.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"25 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114129742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: Mouse sensor networks, the smart camera","authors":"M. Camilli, R. Kleihorst","doi":"10.1109/ICDSC.2011.6042944","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042944","url":null,"abstract":"This paper describes an extremely low-cost smart camera with imaging sensor, freely programmable DSP, power control and wired/wireless networking capabilities. The power consumption reaches from 3mW to 240mW depending on load and transmission rates and the BOM for a single device is now 25 euros. We were able to reduce both the power consumption and price by going to minimal resolution imagers (30×30 pixels) allowing us to reduce the performance demands on the DSP engine. The lower resolution, although with processing frame rates of up to 80fps, still allows many common applications for visual sensors such as object detection, fall detection, motion estimation and face detection. In addition, the resolution is low enough to guarantee privacy.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114593073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic reconfiguration of video sensor networks for optimal 3D coverage","authors":"C. Piciarelli, C. Micheloni, G. Foresti","doi":"10.1109/ICDSC.2011.6042905","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042905","url":null,"abstract":"During the last years, the research in the field of video analytics has focused more and more on video sensor networks. Although single-sensor processing is still an open research field, practical applications nowadays require video analysis systems to explicitly consider multiple sensors at once, since the use of multiple sensors can lead to better algorithms for tracking, object recognition, etc. However, given a network of video sensors, it is not always clear how the network should be configured (in terms of sensor orientations) in order to optimize the system performance. In this work we propose a method to compute a (locally) optimal network configuration maximizing the coverage of a 3D environment, given that a relevance map of the environment exists, expressing the coverage priorities for each zone. The proposed method relies on a transformation projecting the observed environment into a new space where the problem can be solved by means of standard techniques such as the Expectation-Maximization algorithm applied to Gaussian Mixture Models.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122081211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo: Vision based smart in-car camera system for driver yawning detection","authors":"B. Hariri, Shabnam Abtahi, S. Shirmohammadi, Luc Martel","doi":"10.1109/ICDSC.2011.6042952","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042952","url":null,"abstract":"In this demo we will present a vision-based smart environment using in-car cameras that can be used for real time tracking and monitoring of a driver in order to detect the driver's drowsiness based on yawning detection. As driver fatigue and drowsiness is a major cause behind a large number of road accidents, the assistive systems that monitor a driver's level of drowsiness and alert the driver in case of vigilance can play an important role in the prevention of such accidents. Our system is built on the top of an embedded platform, called APEX™ from CogniVue Corp., that is easy and practical for installation inside a car. Moreover, we have aimed at optimizing the system in a way that it meets the real time requirements of the monitoring task while relying on the limited processing power of the embedded platform.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132151842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced patrol course planning method for multiple mobile surveillance cameras","authors":"Yoichi Tomioka, Atsushi Takara, H. Kitazawa","doi":"10.1109/ICDSC.2011.6042933","DOIUrl":"https://doi.org/10.1109/ICDSC.2011.6042933","url":null,"abstract":"Video surveillance systems are becoming increasingly important for crime investigation and deterrence. By the rapid advance of mobile robot technologies, mobile surveillance cameras are becoming an attractive option for the video surveillance systems. In this paper, we propose a method for obtaining the minimum number of mobile surveillance cameras and their shortest patrol courses under the following two conditions. First, the restriction of the visibility must be taken into consideration. Second, each region must be observed at a certain interval. In our experiments, we demonstrate that effective patrol courses for mobile surveillance cameras can be generated.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124951572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}