2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC): Latest Publications

Optimal sensor placement for surveillance of large spaces
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289398
S. Indu, S. Chaudhury, Nikhil R. Mittal, A. Bhattacharyya
{"title":"Optimal sensor placement for surveillance of large spaces","authors":"S. Indu, S. Chaudhury, Nikhil R. Mittal, A. Bhattacharyya","doi":"10.1109/ICDSC.2009.5289398","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289398","url":null,"abstract":"Visual sensor network design facilitates applications such as intelligent rooms, video surveillance, automatic multi-camera tracking, activity recognition etc. These applications require an efficient visual sensor layout which provides a minimum level of image quality or image resolution. This paper addresses the practical problem of optimally placing the multiple PTZ cameras to ensure maximum coverage of user defined priority areas with optimum values of parameters like pan, tilt, zoom and the locations of the cameras. The proposed algorithm works offline and does not require camera calibration. We mapped this problem as an optimization problem using Genetic Algorithm, by defining, coverage matrix as a set of sensor parameters and the space model parameters like priority areas, obstacles and feasible locations of the sensors, and by modelling discrete spaces using probabilistic frame work. We minimized the probability of occlusion due to randomly moving objects by covering each priority area using multiple cameras. The proposed method will be applicable for surveillance of large spaces with discrete priority areas like a hall with more than one entrance or many events happening at different locations in a hall eg.Casino. As we are optimizing the parameters like pan, tilt, zoom and even the locations of the cameras, the coverage provided by this approach will assure good resolution, which improves the QOS of the visual sensor network.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127079884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 50
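
The genetic-algorithm formulation in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation: the 2-D grid, the sector field-of-view model (pan and range only, no tilt or zoom), the priority weights, and the selection-plus-mutation GA loop are all assumptions made for illustration.

```python
import random
import numpy as np

GRID = 20                                    # assumed 20x20 discretised floor plan
PRIORITY = np.zeros((GRID, GRID))
PRIORITY[2:5, 2:5] = 1.0                     # hypothetical priority areas (e.g. entrances)
PRIORITY[14:18, 15:19] = 1.0
FEASIBLE = [(0, c) for c in range(GRID)] + [(GRID - 1, c) for c in range(GRID)]  # wall-mounted sites
N_CAMS, FOV, RANGE = 3, np.deg2rad(60), 12.0

def coverage(genome):
    """Fraction of priority weight seen by at least one camera (sector FOV model)."""
    covered = np.zeros_like(PRIORITY)
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    for (r, c), pan in genome:
        dist = np.hypot(ys - r, xs - c)
        ang_diff = np.abs((np.arctan2(ys - r, xs - c) - pan + np.pi) % (2 * np.pi) - np.pi)
        covered[(dist <= RANGE) & (ang_diff <= FOV / 2)] = 1.0
    return float((covered * PRIORITY).sum() / PRIORITY.sum())

def random_genome():
    return [(random.choice(FEASIBLE), random.uniform(-np.pi, np.pi)) for _ in range(N_CAMS)]

def mutate(genome):
    g = list(genome)
    g[random.randrange(N_CAMS)] = (random.choice(FEASIBLE), random.uniform(-np.pi, np.pi))
    return g

# selection-plus-mutation GA: keep the 10 fittest layouts, refill by mutating them
pop = [random_genome() for _ in range(40)]
for _ in range(100):
    pop.sort(key=coverage, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]
best = max(pop, key=coverage)
print("best coverage of priority areas:", coverage(best))
```
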
Template matching based tracking of players in indoor team sports
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289408
E. Monier, P. Wilhelm, U. Rückert
{"title":"Template matching based tracking of players in indoor team sports","authors":"E. Monier, P. Wilhelm, U. Rückert","doi":"10.1109/ICDSC.2009.5289408","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289408","url":null,"abstract":"This paper presents a video tracking system for tracking players in indoor sports using two high quality digital cameras. The tracking algorithm is a based on template matching technique taking into consideration closed world assumptions. The output of the system can be visualized interactively for convenient analysis of player movements. The implementation has been efficiently done as a software system that can be used by coaches and sport scientists.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127753694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
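
A minimal sketch of the core template-matching step only, assuming plain normalized cross-correlation over a local search window; the synthetic frames, template size, and search radius are invented for illustration, and the paper's closed-world reasoning is not modelled.

```python
import numpy as np

def ncc_match(frame, template, prev_xy, search=20):
    """Locate `template` in `frame` near `prev_xy` via normalized cross-correlation."""
    th, tw = template.shape
    px, py = prev_xy
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum()) + 1e-8
    best_score, best_xy = -1.0, prev_xy
    for y in range(max(0, py - search), min(frame.shape[0] - th, py + search)):
        for x in range(max(0, px - search), min(frame.shape[1] - tw, px + search)):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            score = float((p * t).sum() / (np.sqrt((p ** 2).sum()) * t_norm + 1e-8))
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score

# toy usage: follow a small blob (a "player") that moves between two synthetic frames
yy, xx = np.mgrid[0:10, 0:10]
blob = np.exp(-((yy - 4.5) ** 2 + (xx - 4.5) ** 2) / 8.0)
frame0 = np.zeros((120, 160)); frame0[50:60, 70:80] = blob
frame1 = np.zeros((120, 160)); frame1[53:63, 74:84] = blob
template = frame0[50:60, 70:80]
print(ncc_match(frame1, template, prev_xy=(70, 50)))   # expect a match near (74, 53)
```
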
PhD forum: A distributed architecture for object tracking across intelligent vision sensor network with constrained resources
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289388
J. Goshorn, R. Cruz, Serge J. Belongie
{"title":"PhD forum: A distributed architecture for object tracking across intelligent vision sensor network with constrained resources","authors":"J. Goshorn, R. Cruz, Serge J. Belongie","doi":"10.1109/ICDSC.2009.5289388","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289388","url":null,"abstract":"Tracking objects across a network of intelligent vision sensors requires an architecture to distribute intelligent processing algorithms locally to the intelligent vision sensor and an algorithm for the communication of the acquired information to nearby sensors for collaboration and hand-offs of tracked objects. Additionally, the selection of which intelligent algorithms need to be performed at each intelligent sensor, and the management of constrained resources of the network, including network capacity (transmission rates), processing capacity (local processing power of sensor node) and in some cases, battery life of the sensor node must also occur. In the case of object tracking, as the number of tracked objects in the network increase, the resources consumed increases, as more processing power is required to create object descriptors and more networking resources are required to transmit information between sensors to collaboratively track the object. The local processing of intelligent vision algorithms at the vision node transforms high data-rate raw video data into low data rate features to be communicated across the network, thus relieving the networking capacity constraint. We focus on, what we view as the key resource, the sensor nodes' processing capacity, in creating a cluster-based distributed object tracking architecture, which includes resource management for processing capacities of the intelligent sensor nodes.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"07 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129797557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
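
The cluster-level resource management described above can be sketched as a toy greedy assignment of per-object descriptor workloads to the camera node with the most spare processing capacity; the MIPS budgets and per-object cost are invented, and the real architecture's hand-off and communication logic is omitted.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CameraNode:
    name: str
    capacity_mips: float                       # assumed per-node processing budget
    assigned: List[str] = field(default_factory=list)
    load_mips: float = 0.0

def assign_objects(nodes, objects, cost_per_object=35.0):
    """Greedily send each tracked object to the node with the most spare capacity;
    objects that fit nowhere would need a hand-off to a neighbouring cluster."""
    dropped = []
    for obj in objects:
        node = max(nodes, key=lambda n: n.capacity_mips - n.load_mips)
        if node.capacity_mips - node.load_mips >= cost_per_object:
            node.assigned.append(obj)
            node.load_mips += cost_per_object
        else:
            dropped.append(obj)
    return dropped

cluster = [CameraNode("cam-A", 120.0), CameraNode("cam-B", 90.0), CameraNode("cam-C", 60.0)]
unserved = assign_objects(cluster, [f"object-{i}" for i in range(7)])
for n in cluster:
    print(n.name, n.assigned, f"{n.load_mips:.0f}/{n.capacity_mips:.0f} MIPS")
print("needs hand-off:", unserved)
```
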
Resolution mosaic-based Smart Camera for video surveillance
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289364
M. A. Salem, Kristian Klaus, F. Winkler, B. Meffert
{"title":"Resolution mosaic-based Smart Camera for video surveillance","authors":"M. A. Salem, Kristian Klaus, F. Winkler, B. Meffert","doi":"10.1109/ICDSC.2009.5289364","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289364","url":null,"abstract":"Video surveillance is one of the most data intensive applications. A typical video surveillance system consists of one or multiple video cameras, a central storage unit, and a central processing unit. At least two bottlenecks exist: First, the transmission capacity is limited, especially for raw data. Second, the central processing unit has to process the incoming data to give results in real time. Therefore, we propose an FPGA-based embedded camera system which performs all steps of image acquisition, region of interest extraction, generation of a multiresolution image, and image transmission. The proposed pipeline-based architecture allows a real time wavelet-based image segmentation and a detection of moving objects for surveillance purposes. The system is integrated in a single FPGA using external RAM as storage for images and for a Linux operating system which controls the data flow. With the pipeline concept and a Linux device driver it is possible to create a system for streaming the results of an image processing through a GbE interface. A real time processing is achieved. The camera transmits the captured images with 30 Mpixel/s, but the system is able to process 100 Mpixel/s.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121183967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
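
The paper's pipeline runs on an FPGA, but the resolution-mosaic idea itself can be mimicked in a few lines of NumPy: moving regions are kept at full resolution while everything else is downsampled before transmission. The 2x2 block averaging, the motion threshold, and the frame sizes below are assumptions and stand in for the wavelet hardware, not a description of it.

```python
import numpy as np

def resolution_mosaic(frame, background, motion_thresh=0.1):
    """Return a low-resolution full-frame image plus full-resolution crops of
    moving regions, mimicking the camera's multi-resolution output in software."""
    h, w = frame.shape
    motion = np.abs(frame.astype(float) - background.astype(float)) > motion_thresh
    # 2x2 block averaging stands in for the low-pass subband of the wavelet stage
    lowres = frame[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    rois = []
    ys, xs = np.where(motion)
    if ys.size:
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        rois.append(((y0, x0), frame[y0:y1, x0:x1]))   # full-resolution region of interest
    return lowres, rois

background = np.random.rand(480, 640).astype(np.float32)
frame = background.copy()
frame[100:140, 200:260] += 0.5                         # synthetic moving object
low, rois = resolution_mosaic(frame, background)
print("low-res shape:", low.shape, "ROI shapes:", [r[1].shape for r in rois])
```
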
A distributed camera system for multi-resolution surveillance
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289413
N. Bellotto, Eric Sommerlade, Ben Benfold, C. Bibby, I. Reid, D. Roth, Carles Fernández Tena, L. Gool, Jordi Gonzàlez
{"title":"A distributed camera system for multi-resolution surveillance","authors":"N. Bellotto, Eric Sommerlade, Ben Benfold, C. Bibby, I. Reid, D. Roth, Carles Fernández Tena, L. Gool, Jordi Gonzàlez","doi":"10.1109/ICDSC.2009.5289413","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289413","url":null,"abstract":"We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130679307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 57
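
A small sketch of the central-repository pattern the abstract describes, using Python's built-in sqlite3; the `observations` schema and the helper functions are invented for illustration and are not the system's actual tables or API.

```python
import sqlite3
import time

db = sqlite3.connect("surveillance.db")
db.execute("""CREATE TABLE IF NOT EXISTS observations (
                  ts REAL, camera TEXT, target_id INTEGER,
                  x REAL, y REAL, zoom REAL)""")

def camera_process_post(camera, target_id, x, y, zoom=1.0):
    """Each camera/PTZ process asynchronously appends its tracking output."""
    db.execute("INSERT INTO observations VALUES (?, ?, ?, ?, ?, ?)",
               (time.time(), camera, target_id, x, y, zoom))
    db.commit()

def supervisor_latest(target_id):
    """The supervisor reads the most recent observation of a target, from whichever camera produced it."""
    return db.execute("SELECT camera, x, y, ts FROM observations "
                      "WHERE target_id = ? ORDER BY ts DESC LIMIT 1", (target_id,)).fetchone()

camera_process_post("static-1", target_id=7, x=3.2, y=1.1)
camera_process_post("ptz-2", target_id=7, x=3.4, y=1.0, zoom=4.0)
print(supervisor_latest(7))
```
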
View-invariant full-body gesture recognition via multilinear analysis of voxel data
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289411
Bo Peng, G. Qian, Stjepan Rajko
{"title":"View-invariant full-body gesture recognition via multilinear analysis of voxel data","authors":"Bo Peng, G. Qian, Stjepan Rajko","doi":"10.1109/ICDSC.2009.5289411","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289411","url":null,"abstract":"This paper presents a gesture recognition framework using voxel data obtained through visual hull reconstruction from multiple cameras. View-invariant pose descriptors are extracted by projecting voxel data onto a low dimensional pose coefficient space using multilinear analysis. Gestures are then treated as sequences of pose descriptors and represented by hidden Markov models for gesture recognition. Promising results have been obtained using a public data set containing 11 single-person gestures and another data set including seven two-people cooperative dance gestures.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130707121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
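
A minimal sketch of the recognition stage only: low-dimensional pose descriptors (standing in for the multilinear projection of voxel data) are scored against per-gesture hidden Markov models with a log-space forward pass. The toy 3-state models, Gaussian emissions, and descriptor values are assumptions, not the paper's trained models.

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log density of a diagonal-Gaussian emission."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_loglik(seq, pi, A, means, varis):
    """Log-likelihood of a pose-descriptor sequence under one gesture HMM (forward pass)."""
    emit0 = np.array([log_gauss(seq[0], m, v) for m, v in zip(means, varis)])
    log_alpha = np.log(pi) + emit0
    for x in seq[1:]:
        emit = np.array([log_gauss(x, m, v) for m, v in zip(means, varis)])
        m_ = log_alpha.max()
        log_alpha = m_ + np.log(np.exp(log_alpha - m_) @ A) + emit   # log-sum-exp over previous states
    m_ = log_alpha.max()
    return m_ + np.log(np.exp(log_alpha - m_).sum())

def recognise(seq, gesture_models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(gesture_models, key=lambda g: forward_loglik(seq, *gesture_models[g]))

# toy 3-state left-to-right HMMs over 2-D pose descriptors
A = np.array([[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])
pi = np.array([1.0, 0.0, 0.0]) + 1e-6                # tiny floor avoids log(0)
models = {
    "wave":  (pi, A, np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]), np.full((3, 2), 0.2)),
    "squat": (pi, A, np.array([[0.0, 0.0], [-1.0, -1.0], [-2.0, -2.0]]), np.full((3, 2), 0.2)),
}
seq = np.array([[0.0, 0.0], [0.9, 1.1], [2.1, 1.9]])
print(recognise(seq, models))                        # -> "wave"
```
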
Tracking in sparse multi-camera setups using stereo vision
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289371
G. Englebienne, T. V. Oosterhout, B. Kröse
{"title":"Tracking in sparse multi-camera setups using stereo vision","authors":"G. Englebienne, T. V. Oosterhout, B. Kröse","doi":"10.1109/ICDSC.2009.5289371","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289371","url":null,"abstract":"Tracking with multiple cameras with nonoverlapping fields of view is challenging due to the differences in appearance that objects typically have when seen from different cameras. In this paper we use a probabilistic approach to track people across multiple, sparsely distributed cameras, where an observation corresponds to a person walking through the field of view of a camera. Modelling appearance and spatio-temporal aspects probabilistically allows us to deal with the uncertainty but, to obtain good results, it is important to maximise the information content of the features we extract from the raw video images. Occlusions and ambiguities within an observation result in noise, thus making the inference less confident. In this paper, we propose to position stereo cameras on the ceiling, facing straight down, thus greatly reducing the possibility of occlusions. This positioning also leads to specific requirements of the algorithms for feature extraction, however. Here, we show that depth information can be used to solve ambiguities and extract meaningful features, resulting in significant improvements in tracking accuracy.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128978457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
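
A sketch of why the downward-facing overhead placement simplifies feature extraction: with a depth map taken from the ceiling, people separate from the floor by a height threshold and heads appear as local height maxima. The camera height, thresholds, suppression radius, and synthetic depth map are assumptions, not the paper's stereo pipeline.

```python
import numpy as np

CAM_HEIGHT = 3.0                   # assumed ceiling height in metres

def detect_heads(depth, min_person_height=1.2, suppress_radius=30):
    """Convert an overhead depth map to height above the floor and return head
    positions as greedy local maxima; `suppress_radius` roughly covers one person."""
    height = CAM_HEIGHT - depth
    work = height.copy()
    heads = []
    while True:
        r, c = np.unravel_index(np.argmax(work), work.shape)
        if work[r, c] < min_person_height:
            break
        heads.append(((r, c), float(height[r, c])))
        work[max(0, r - suppress_radius):r + suppress_radius,
             max(0, c - suppress_radius):c + suppress_radius] = 0.0
    return heads

# synthetic scene: flat floor 3 m below the camera, two people of different heights
depth = np.full((120, 160), CAM_HEIGHT)
depth[40:60, 50:70] = CAM_HEIGHT - 1.8     # person 1, 1.8 m tall
depth[80:100, 110:130] = CAM_HEIGHT - 1.6  # person 2, 1.6 m tall
print(detect_heads(depth))                 # two heads, no occlusion to resolve
```
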
Multi-camera track-before-detect
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289405
M. Taj, A. Cavallaro
{"title":"Multi-camera track-before-detect","authors":"M. Taj, A. Cavallaro","doi":"10.1109/ICDSC.2009.5289405","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289405","url":null,"abstract":"We present a novel multi-camera multi-target fusion and tracking algorithm for noisy data. Information fusion is an important step towards robust multi-camera tracking and allows us to reduce the effect of projection and parallax errors as well as of the sensor noise. Input data from each camera view are projected on a top-view through multi-level homographic transformations. These projected planes are then collapsed onto the top-view to generate a detection volume. To increase track consistency with the generated noisy data we propose to use a track-before-detect particle filter (TBD-PF) on a 5D state-space. TBD-PF is a Bayesian method which extends the target state with the signal intensity and evaluates each image segment against the motion model. This results in filtering components belonging to noise only and enables tracking without the need of hard thresholding the signal. We demonstrate and evaluate the proposed approach on real multi-camera data from a basketball match.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129996714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
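
A heavily simplified sketch of a track-before-detect particle filter on an already-fused top-view map: each particle carries a 5-D state (position, velocity, signal intensity) and is weighted directly by the noisy map values, so no detection threshold is ever applied. The fused map, motion model, and noise levels are invented for illustration, and the multi-level homography fusion step is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, N, STEPS, SIGMA = 60, 60, 2000, 20, 0.3

def fused_map(t):
    """Noisy fused top-view map with a weak blob drifting right at 1 px/frame."""
    m = rng.normal(0.0, SIGMA, (H, W))
    m[47:54, 27 + t:34 + t] += 3.0
    return m

# 5-D state per particle: x, y, vx, vy, signal intensity
particles = np.column_stack([
    rng.uniform(0, W, N), rng.uniform(0, H, N),
    rng.normal(0, 0.5, N), rng.normal(0, 0.5, N), rng.uniform(0.5, 4.0, N)])

for t in range(STEPS):
    z = fused_map(t)
    # constant-velocity prediction with process noise; intensity also diffuses
    particles[:, 0:2] += particles[:, 2:4] + rng.normal(0, 0.5, (N, 2))
    particles[:, 2:4] += rng.normal(0, 0.2, (N, 2))
    particles[:, 4] = np.clip(particles[:, 4] + rng.normal(0, 0.1, N), 0.1, 5.0)
    xs = np.clip(particles[:, 0], 0, W - 1).astype(int)
    ys = np.clip(particles[:, 1], 0, H - 1).astype(int)
    # matched-filter likelihood ratio: weights come straight from the map, no thresholding
    a = particles[:, 4]
    w = np.exp((z[ys, xs] * a - 0.5 * a ** 2) / SIGMA ** 2)
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]     # multinomial resampling every frame

print("estimated (x, y):", particles[:, :2].mean(axis=0))
print("true blob centre at last frame:", (30.0 + STEPS - 1, 50.0))
```
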
Face tracking and recognition by using omnidirectional sensor network
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289367
Yuzuko Utsumi, Y. Iwai
{"title":"Face tracking and recognition by using omnidirectional sensor network","authors":"Yuzuko Utsumi, Y. Iwai","doi":"10.1109/ICDSC.2009.5289367","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289367","url":null,"abstract":"In recent years, security camera systems have been installed in various public facilities. More intelligent processes are needed to track people in image sequences for security camera systems. In this paper, we propose a face tracking and recognition method based on a Bayesian framework. We assume that an observed space is three-dimensional, and we estimate the 3D position of a person. We use facial 3D shape, movement, and texture models for face tracking and recognition. Omnidirectional image sensors are used to acquire image sequences of a walking person because the sensors have a wide view and are suitable for object tracking. Our system generates 3D positional hypotheses based on the facial movement model and these positional hypotheses are projected onto an image plane. Image features are extracted from projected hypotheses and the system distinguishes faces using these image features. Our evaluation experiments show that our proposed method is effective for face tracking, and that tracking accuracy is proportional to the number of cameras used.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128162938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
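
A sketch of the hypothesis-projection step only: 3-D position hypotheses propagated by a simple motion model are projected into an omnidirectional image under an assumed equiangular (fisheye) camera model. The camera model, its parameters, and the motion noise are invented, and the appearance likelihood is left as a comment stub rather than the paper's texture model.

```python
import numpy as np

rng = np.random.default_rng(2)
IMG_SIZE, F = 800, 250.0                  # assumed equiangular fisheye model: r = F * theta

def project_omni(points_cam):
    """Project 3-D points (camera coordinates, z along the optical axis) onto an
    equiangular omnidirectional image plane."""
    x, y, z = points_cam.T
    theta = np.arctan2(np.hypot(x, y), z)  # angle from the optical axis
    phi = np.arctan2(y, x)
    r = F * theta
    u = IMG_SIZE / 2 + r * np.cos(phi)
    v = IMG_SIZE / 2 + r * np.sin(phi)
    return np.column_stack([u, v])

def propagate(hypotheses, dt=1.0, walk_sigma=0.15):
    """Random-walk motion model standing in for the facial movement model."""
    return hypotheses + rng.normal(0.0, walk_sigma * dt, hypotheses.shape)

# 3-D position hypotheses (metres, camera-centred) for one walking person's head
hyps = rng.normal(loc=[2.0, 1.0, 1.6], scale=0.2, size=(200, 3))
hyps = propagate(hyps)
pixels = project_omni(hyps)
# appearance stub: in the full system, face features extracted around each projected
# pixel would weight the hypotheses and drive recognition
print(pixels[:3])
```
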
Abnormal motion detection in a real-time smart camera system
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) | Pub Date: 2009-10-20 | DOI: 10.1109/ICDSC.2009.5289359
Mona Akbarniai Tehrani, R. Kleihorst, Peter B. L. Meijer, L. Spaanenburg
{"title":"Abnormal motion detection in a real-time smart camera system","authors":"Mona Akbarniai Tehrani, R. Kleihorst, Peter B. L. Meijer, L. Spaanenburg","doi":"10.1109/ICDSC.2009.5289359","DOIUrl":"https://doi.org/10.1109/ICDSC.2009.5289359","url":null,"abstract":"This paper discusses a method for abnormal motion detection and its real-time implementation on a smart camera. Abnormal motion detection is a surveillance technique that only allows unfamiliar motion patterns to result in alarms. Our approach has two phases. First, normal motion is detected and the motion paths are trained, building up a model of normal behaviour. Feed-forward neural networks are here used for learning. Second, abnormal motion is detected by comparing the current observed motion to the stored model. A complete demonstration system is implemented to detect abnormal paths of persons moving in an indoor space. As platform we used a wireless smart camera system containing an SIMD (Single-Instruction Multiple-Data) processor for real-time detection of moving persons and an 8051 microcontroller for implementing the neural network. The 8051 also functions as camera host to broadcast abnormal events using ZigBee to a main network system.","PeriodicalId":324810,"journal":{"name":"2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132393686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
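
A compact sketch of the two-phase idea (clearly not the 8051 implementation): a small feed-forward network is trained offline on normal trajectories to predict the next displacement, and at run time a large prediction error flags the motion as abnormal. The corridor data, network size, and alarm threshold are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# normal behaviour: people walk left-to-right along a corridor (y stays near 0.5)
def normal_tracks(n_tracks=200, length=20):
    xs = np.linspace(0.0, 1.0, length)
    return [np.column_stack([xs, 0.5 + 0.05 * rng.normal(size=length)]) for _ in range(n_tracks)]

def make_samples(tracks):
    X, Y = [], []
    for trk in tracks:
        for i in range(1, len(trk) - 1):
            X.append(np.concatenate([trk[i], trk[i] - trk[i - 1]]))  # position + velocity
            Y.append(trk[i + 1] - trk[i])                            # next displacement
    return np.array(X), np.array(Y)

# one-hidden-layer feed-forward network trained by plain full-batch gradient descent
X, Y = make_samples(normal_tracks())
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    Hid = np.tanh(X @ W1 + b1)
    err = Hid @ W2 + b2 - Y
    gW2 = Hid.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - Hid ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

ALARM_THRESHOLD = 0.1   # assumed; in practice tuned on held-out normal trajectories

def abnormality(prev_pos, pos, next_pos):
    """Prediction error of the learned normal-motion model for one observed step."""
    feat = np.concatenate([pos, pos - prev_pos])
    predicted_next = pos + np.tanh(feat @ W1 + b1) @ W2 + b2
    return float(np.linalg.norm(predicted_next - next_pos))

# a normal step along the corridor vs. an abnormal jump across it
print("normal  :", abnormality(np.array([0.40, 0.50]), np.array([0.45, 0.50]), np.array([0.50, 0.50])))
print("abnormal:", abnormality(np.array([0.40, 0.50]), np.array([0.45, 0.50]), np.array([0.45, 0.90])))
```
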