Latest publications: 2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)

Adaptive power control for solar harvesting multimodal wireless smart camera
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289358
M. Magno, D. Brunelli, L. Thiele, L. Benini
Abstract: Energy efficiency for wireless smart camera networks is one of the major efforts in the distributed monitoring and surveillance community. If video cameras are equipped with circuits that receive and convert energy from regenerative sources such as solar cells, effective power management becomes essential for the design of small-sized and perpetually powered devices, which can be deployed unattended for years and feature smart vision applications. In this paper we present a simple but optimal power management scheme tailored for multi-modal video sensor nodes and based on model predictive control (MPC) principles. The system is designed for low-power and low-cost video surveillance and exploits small solar cells for battery recharging and Pyroelectric InfraRed (PIR) sensors to provide low-power monitoring when the camera is not needed. The aim of this work is to show how an adaptive controller helps the system improve its performance while outperforming naive power management policies. Simulation results and measurements on the video sensor node demonstrate the efficiency of our approach.
Citations: 44
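The abstract above describes an MPC-based adaptive power manager but gives no implementation details. As a rough, hypothetical illustration of the underlying idea (all names and numbers here are invented, not taken from the paper), a planner can choose a duty cycle per time slot so that the predicted battery level never goes negative against a solar harvest forecast:

```python
# Hypothetical sketch of adaptive duty-cycle planning for an
# energy-harvesting camera node. All parameters are invented;
# the paper's actual controller is based on model predictive control.

def plan_duty_cycles(battery_wh, harvest_forecast_wh, horizon,
                     active_w=0.5, sleep_w=0.01, slot_h=1.0,
                     capacity_wh=2.0):
    """Pick, per time slot, the highest duty cycle (in 10% steps)
    that keeps the predicted battery level non-negative."""
    duties = []
    level = battery_wh
    for t in range(horizon):
        for duty in (d / 10 for d in range(10, -1, -1)):
            drain = (duty * active_w + (1 - duty) * sleep_w) * slot_h
            if level + harvest_forecast_wh[t] - drain >= 0:
                break  # sleep-only operation is assumed always affordable
        level = min(capacity_wh, level + harvest_forecast_wh[t] - drain)
        duties.append(duty)
    return duties

# Full duty while the sun shines, throttled when running on battery:
print(plan_duty_cycles(0.2, [0.5, 0.0, 0.0, 0.5], horizon=4))
```

The controller runs at full duty in sunny slots and backs off during the dark slots, which is the adaptive behavior the abstract contrasts with naive fixed policies.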
A pervasive smart camera network architecture applied for multi-camera object classification
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289377
Wolfgang Schriebl, Thomas Winkler, Andreas Starzacher, B. Rinner
Abstract: Visual sensor networks are an emerging research area with the goal of using cameras as pervasive and affordable sensing and processing devices. This paper presents a pervasive smart camera platform which is built from off-the-shelf hardware and software components. The hardware platform comprises an OMAP 3530 processor, 128 MB RAM and various interfaces for connecting sensors and peripherals. A dual-radio wireless network allows trading communication performance for power consumption. The software architecture is built upon standard Linux and supports dataflow-oriented application development by dynamically instantiating and connecting function blocks. Data is transferred between blocks via shared memory for high throughput. We present a performance evaluation of our smart camera platform as well as a multi-camera object classification system to demonstrate the capabilities and applicability of our approach.
Citations: 25
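The dataflow style described above (function blocks instantiated at run time and connected into a pipeline) can be illustrated with a minimal sketch. The blocks and values here are invented, and the real platform moves frames through shared memory rather than Python lists:

```python
# Invented mini-version of a dataflow pipeline: function blocks are
# chained at run time so each block's output frame feeds the next.

def chain(*blocks):
    """Connect function blocks into a single runnable pipeline."""
    def run(frame):
        for block in blocks:
            frame = block(frame)
        return frame
    return run

# Two toy blocks operating on a frame given as rows of RGB tuples:
grayscale = lambda img: [[sum(px) // 3 for px in row] for row in img]
threshold = lambda img: [[1 if v > 127 else 0 for v in row] for row in img]

pipeline = chain(grayscale, threshold)
print(pipeline([[(200, 200, 200), (10, 10, 10)]]))  # [[1, 0]]
```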
PhD forum: Dempster-Shafer based camera contribution evaluation for task assignment in vision networks
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289389
M. Morbée, L. Tessens, W. Philips, H. Aghajan
Abstract: In a network of cameras, it is important that the right subset of cameras takes care of the right task. In this work, we describe a general framework to evaluate the contribution of subsets of cameras to a task. Each task is the observation of an event of interest and consists of assessing the validity of a set of hypotheses. All cameras gather evidence for those hypotheses. The evidence from different cameras is fused using the Dempster-Shafer theory of evidence. After combining the evidence for a set of cameras, the remaining uncertainty about a set of hypotheses allows us to identify how well a certain camera subset is suited for a certain task. Taking into account these subset contribution values, we can determine in an efficient way the set of subset-task assignments that yields the best overall task performance.
Citations: 1
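The fusion step named in this abstract, Dempster's rule of combination, is standard and can be sketched directly; the camera mass values below are made-up examples, not from the paper:

```python
# Dempster's rule of combination for two cameras' evidence over a tiny
# frame of discernment {P = person present, A = absent}. Mass values
# are illustrative only.

from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset of hypotheses -> mass)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass on contradictory focal elements
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

H = frozenset
cam1 = {H("P"): 0.6, H("A"): 0.1, H("PA"): 0.3}  # H("PA") = ignorance
cam2 = {H("P"): 0.5, H("A"): 0.2, H("PA"): 0.3}
fused = dempster_combine(cam1, cam2)
print(fused[H("P")])  # belief mass on "present" grows after fusion
```

The mass left on the ignorance set `H("PA")` after combination is exactly the "remaining uncertainty" the abstract uses to score how well a camera subset suits a task.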
PhD forum: Calibrating and using the global network of outdoor webcams
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289404
Nathan Jacobs, Robert Pless
Abstract: The vast imaging resources available via the Internet are underutilized. We propose to lay the foundation for the use of cameras attached to the Internet, also known as webcams, as free and flexible sensors. Developing an understanding of the relationship between signals in the world and the image variations they cause is critical to this effort. We use this understanding to develop methods to calibrate webcams, to estimate scene properties, and to report the weather.
Citations: 0
Planning ahead for PTZ camera assignment and handoff
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289420
F. Qureshi, Demetri Terzopoulos
Abstract: We present a visual sensor network, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high quality surveillance video of selected pedestrians during their prolonged presence in an area of interest. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can follow a single pedestrian at a time. The proactive control of multiple PTZ cameras is required to record seamless, high quality video of a roaming individual despite the observational constraints of the different cameras. We formulate PTZ camera assignment and handoff as a planning problem whose solution achieves optimal camera assignment with respect to predefined observational goals.
Citations: 79
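As a toy illustration of formulating camera assignment as an optimization problem, one could exhaustively search one-to-one camera-to-pedestrian assignments for the highest total observational utility. The paper's planner additionally looks ahead in time to handle handoff, which this one-shot version does not; the utility matrix is invented:

```python
# Toy sketch: one-shot PTZ-camera-to-pedestrian assignment by brute
# force over permutations. Assumes as many cameras as pedestrians;
# utilities are hypothetical observational scores.

from itertools import permutations

def best_assignment(utility):
    """utility[c][p] = how well PTZ camera c can observe pedestrian p.
    Returns the one-to-one assignment maximizing total utility."""
    n = len(utility)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(utility[c][perm[c]] for c in range(n))
        if score > best_score:
            best, best_score = perm, score
    return list(best), best_score

util = [[3, 1, 2],
        [1, 3, 1],
        [2, 2, 3]]
assignment, score = best_assignment(util)  # camera c follows assignment[c]
```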
An extension of the AVC file format for Video Surveillance
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289347
J. Annesley, Gero Base, J. Orwell, M. S. H. Sabirin
Abstract: This paper introduces and discusses a recent standardization project by the MPEG group that aims at providing an interoperable file format tailored to the needs of the surveillance industry. The new standard is called Video Surveillance Application Format and extends the H.264/AVC file format specification. Based on a review of the application requirements, the rationale is provided for the technical design choices, and an example system implementation is described in detail. This provides interoperability at the data format and metadata levels. An example reference implementation instantiates and tests the design. The result is the specification of a file format suitable for large-scale camera network deployment.
Citations: 1
Mapping schemes of image recognition tasks onto highly parallel SIMD/MIMD processors
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289350
S. Kyo, Shouhei Nomoto, S. Okazaki
Abstract: Smart camera applications based on image recognition techniques require significant levels of computation and must operate within limited power budgets. This paper focuses on the schemes of mapping image recognition tasks onto a series of low-power highly parallel SIMD/MIMD mode switching processors called IMAPCAR2. In this paper, we discuss hardware design considerations, the schemes of mapping image tasks onto the architecture using the SIMD or MIMD execution modes, and the way to choose between execution modes. Benchmark results show that the measured performance of an IMAPCAR2-300 (108 MHz, 128 PE / 32 PU, 90-nm, ≪ 1 W) processor running the compiler-generated code of programs based on the proposed mapping schemes is up to 27 times faster using the SIMD mode, or up to 2.8 times faster using the MP mode, than a 1.6 GHz general purpose processor that consumes a similar amount of power.
Citations: 7
Target detection and counting using a progressive certainty map in distributed visual sensor networks
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289414
M. Karakaya, H. Qi
Abstract: Visual sensor networks (VSNs) merge computer vision, image processing and wireless sensor network disciplines to solve problems in multi-camera applications by providing valuable information through distributed sensing and collaborative in-network processing. Collaboration in sensor networks is necessary not only to compensate for the processing, sensing, energy, and bandwidth limitations of each sensor node but also to improve the accuracy and robustness of the sensor network. Collaborative processing in VSNs is more challenging than in conventional scalar sensor networks (SSNs) because of two unique features of cameras: an extremely high data rate compared to that of scalar sensors, and directional sensing with a limited field of view. In this paper, we study a challenging computer vision problem: target detection and counting in a VSN environment. Traditionally, the problem is solved by counting the number of intersections of the backprojected 2D cones of each target. However, visual occlusion among targets generates many false alarms. In this work, instead of resolving the uncertainty about target existence at the intersections, we identify and study the non-occupied areas in the cones and generate a so-called certainty map of the non-existence of targets. This way, after fusing inputs from a set of sensor nodes, the unresolved regions on the certainty map are the locations of targets. This paper focuses on the design of a light-weight, energy-efficient, and robust solution where each camera node transmits only a very limited amount of data and only a limited number of camera nodes is used. We propose a dynamic itinerary for certainty map integration where the entire map is progressively clarified from sensor to sensor. When the confidence of the certainty map is satisfied, a geometric counting algorithm is applied to find the estimated number of targets. In experiments using real data, the proposed distributed and progressive method shows effectiveness in detection accuracy and in energy and bandwidth efficiency.
Citations: 20
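The certainty-map idea above (each camera marks the ground cells it can certify as empty, and whatever stays unresolved after fusion is where targets must be) can be sketched on a toy grid. This is an illustration of the concept only, not the paper's algorithm:

```python
# Illustrative sketch: fuse per-camera "certainly empty" cell sets into
# a certainty map, then count the 4-connected unresolved blobs that
# remain as target estimates. Grids and masks are invented.

def fuse_and_count(empty_masks, rows, cols):
    certain_empty = [[False] * cols for _ in range(rows)]
    for mask in empty_masks:              # progressive fusion: each
        for r, c in mask:                 # camera clears the cells it
            certain_empty[r][c] = True    # observes as non-occupied
    # Count 4-connected components of still-unresolved cells.
    seen, count = set(), 0
    for r in range(rows):
        for c in range(cols):
            if certain_empty[r][c] or (r, c) in seen:
                continue
            count += 1                    # one blob = one target estimate
            stack = [(r, c)]
            while stack:
                y, x = stack.pop()
                if (y, x) in seen or certain_empty[y][x]:
                    continue
                seen.add((y, x))
                stack.extend((y + dy, x + dx)
                             for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                             if 0 <= y + dy < rows and 0 <= x + dx < cols)
    return count

# Two cameras together clear all but two isolated cells -> two targets:
masks = [[(0, 1), (0, 2), (1, 0), (1, 1), (1, 2)], [(2, 0), (2, 1)]]
print(fuse_and_count(masks, 3, 3))  # 2
```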
An efficient system for vehicle tracking in multi-camera networks
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289383
M. Dixon, Nathan Jacobs, Robert Pless
Abstract: The recent deployment of very large-scale camera networks has led to a unique version of the tracking problem whose goal is to detect and track every vehicle within a large urban area. To address this problem we exploit constraints inherent in urban environments (i.e., while there are often many vehicles, they follow relatively consistent paths) to create novel visual processing tools that are highly efficient at detecting cars in a fixed scene and at connecting these detections into partial tracks. We derive extensions to a network-flow based probabilistic data association model to connect these tracks between cameras. Our real-time system is evaluated on a large set of ground-truthed traffic videos collected by a network of seven cameras in a dense urban scene.
Citations: 21
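The paper links partial tracks across cameras with a network-flow probabilistic data association model. As a much simpler stand-in that illustrates the association step itself, a greedy lowest-cost linker might look like this (all costs are invented):

```python
# Simplified stand-in for cross-camera data association: greedily link
# track fragments by lowest cost. The paper uses a network-flow model
# instead; this sketch only illustrates the association step.

def link_tracks(costs, max_cost=1.0):
    """costs[(i, j)] = cost of linking exit-track i to entry-track j
    (e.g., appearance distance plus travel-time mismatch).
    Greedily links the cheapest compatible pairs, one-to-one."""
    links, used_j = {}, set()
    for (i, j), c in sorted(costs.items(), key=lambda kv: kv[1]):
        if c > max_cost or i in links or j in used_j:
            continue
        links[i] = j
        used_j.add(j)
    return links

costs = {(0, 0): 0.2, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.3}
print(link_tracks(costs))  # {0: 0, 1: 1}
```

Greedy linking can miss the globally best pairing, which is precisely why the paper formulates the problem as a min-cost flow over all tracks at once.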
Color Brightness Transfer Function evaluation for non overlapping multi camera tracking
2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) Pub Date: 2009-10-20 DOI: 10.1109/ICDSC.2009.5289365
T. D’orazio, P. Mazzeo, P. Spagnolo
Abstract: People tracking across multiple cameras is of great interest for wide-area video surveillance systems. Multi-camera tracking with non-overlapping fields of view (FOV) involves tracking people through the blind regions and matching their correspondences across cameras. We consider these problems in this paper. We propose a multi-camera architecture for wide-area surveillance and a real-time people tracking algorithm across non-overlapping cameras. We compared different methods to evaluate the color Brightness Transfer Function (BTF) between non-overlapping cameras. These approaches are based on a training phase during which the color histogram mapping between pairs of images of the same object, observed in the different fields of view, is carried out. The experimental results compare two different transfer functions and demonstrate their limits in people association when a new person enters one camera's FOV.
Citations: 62
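A common way to estimate a brightness transfer function between two cameras is cumulative-histogram matching on images of the same object. The sketch below assumes that approach; the paper compares several BTF variants, not necessarily this exact one, and the histograms here are invented:

```python
# Cumulative-histogram matching as one way to estimate a BTF
# (assumed approach; levels and counts below are invented).

import bisect

def btf_from_histograms(hist_a, hist_b):
    """Map each brightness level of camera A to the level of camera B
    whose cumulative frequency first reaches A's."""
    def cdf(hist):
        total, acc, out = float(sum(hist)), 0.0, []
        for h in hist:
            acc += h
            out.append(acc / total)
        return out
    cdf_a, cdf_b = cdf(hist_a), cdf(hist_b)
    return [min(bisect.bisect_left(cdf_b, ca), len(cdf_b) - 1)
            for ca in cdf_a]

# A sees a uniform 4-level histogram; B sees the same object darker,
# so A's mid levels map down to B's low levels:
print(btf_from_histograms([4, 4, 4, 4], [8, 4, 2, 2]))  # [0, 0, 1, 3]
```

Applying this mapping to an appearance histogram before cross-camera matching is what compensates for the photometric differences the abstract evaluates.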