Proceedings of the 9th International Conference on Distributed Smart Cameras: Latest Publications

Mean field variational inference using Bregman ADMM for distributed camera network
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2802656
Behnam Babagholami-Mohamadabadi, Sejong Yoon, V. Pavlovic
Abstract: Bayesian models provide a framework for probabilistic modelling of complex datasets. However, many such models are computationally demanding, especially in the presence of large datasets. On the other hand, in sensor network applications, statistical (Bayesian) parameter estimation usually needs distributed algorithms, in which both data and computation are distributed across the nodes of the network. In this paper we propose a general framework for distributed Bayesian learning using the Bregman Alternating Direction Method of Multipliers (B-ADMM). We demonstrate the utility of our framework with Mean Field Variational Bayes (MFVB) as the primitive for distributed affine structure from motion (SfM).
Citations: 3
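
As a rough illustration of the consensus machinery such a framework builds on, the sketch below runs standard consensus ADMM (the Euclidean special case of the Bregman penalty) on a toy distributed mean-estimation problem; the node data, penalty parameter rho, and iteration count are illustrative assumptions, not the paper's MFVB updates.

```python
import numpy as np

# Minimal sketch of consensus ADMM for distributed parameter estimation.
# Uses the Euclidean special case of the Bregman penalty; node data,
# rho, and the number of iterations are illustrative choices.

rng = np.random.default_rng(0)
true_mean = np.array([2.0, -1.0])
data = [true_mean + rng.normal(0, 1, (50, 2)) for _ in range(5)]  # 5 camera nodes

rho = 1.0
z = np.zeros(2)                      # global consensus variable
x = [np.zeros(2) for _ in data]      # local estimates
y = [np.zeros(2) for _ in data]      # dual variables

for _ in range(100):
    # Local (primal) updates: closed form for a quadratic local loss.
    for i, d in enumerate(data):
        x[i] = (d.sum(axis=0) + rho * z - y[i]) / (len(d) + rho)
    # Consensus update: average of the shifted local estimates.
    z = np.mean([x[i] + y[i] / rho for i in range(len(data))], axis=0)
    # Dual updates penalize disagreement with the consensus.
    for i in range(len(data)):
        y[i] += rho * (x[i] - z)

print("consensus estimate:", z)  # close to the pooled sample mean
```
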
Accelerating FPGA-based object detection via a visual information extraction cascade
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789147
C. Kyrkou, T. Theocharides
Abstract: Object detection is a major step in several computer vision applications. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware fabric (Field Programmable Gate Arrays, FPGAs), and relevant research has produced quite fascinating results, both in the accuracy of the detection algorithms and in performance in terms of frames per second (FPS) for use in embedded systems. Detecting objects in images, however, is a daunting task that involves steps which are hardware-inefficient, both in terms of datapath design and in terms of input/output and memory accesses. In this work, we present how a visual information extraction cascade composed of disparity estimation, edge detection and motion detection can significantly reduce the amount of data that needs to be processed. As such, it can reduce power consumption while improving the performance of object detection algorithms. Initial results indicate a data search reduction of up to 87% in the best case, with an average of more than 50%.
Citations: 2
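
To make the data-reduction idea concrete, here is a hedged sketch of gating a sliding-window detector with cheap edge and motion masks; the paper's cascade also includes disparity estimation and targets FPGA datapaths, whereas everything below (thresholds, window sizes, the synthetic frame) is an illustrative software stand-in.

```python
import numpy as np

# Minimal sketch of gating a sliding-window detector with a cheap
# information-extraction cascade (edge + motion masks). Thresholds and
# the frame source are illustrative, not the paper's FPGA pipeline.

def edge_mask(frame, thresh=30):
    # Gradient magnitude via simple finite differences.
    gx = np.abs(np.diff(frame.astype(np.int32), axis=1, prepend=0))
    gy = np.abs(np.diff(frame.astype(np.int32), axis=0, prepend=0))
    return (gx + gy) > thresh

def motion_mask(frame, prev, thresh=20):
    # Frame differencing as a stand-in for the motion-detection stage.
    return np.abs(frame.astype(np.int32) - prev.astype(np.int32)) > thresh

def candidate_windows(frame, prev, win=24, stride=12, min_active=0.03):
    active = edge_mask(frame) & motion_mask(frame, prev)
    h, w = frame.shape
    for r in range(0, h - win, stride):
        for c in range(0, w - win, stride):
            # Only windows with enough active pixels reach the detector.
            if active[r:r + win, c:c + win].mean() > min_active:
                yield r, c

prev = np.zeros((120, 160), dtype=np.uint8)
frame = np.zeros_like(prev)
frame[40:80, 60:100] = 200  # a bright moving blob
windows = list(candidate_windows(frame, prev))
print(f"{len(windows)} windows forwarded to the detector")
```
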
A camera uncertainty model for collaborative visual sensor network applications
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789130
C. Kyrkou, E. Christoforou, T. Theocharides, C. Panayiotou, M. Polycarpou
Abstract: Visual Sensor Networks (VSNs) exploit the processing and communication capabilities of modern smart cameras to handle a variety of applications such as security and surveillance, industrial monitoring, and critical infrastructure protection. The performance of VSNs can be severely degraded by errors in the detection module. As a result, the performance of higher-level applications such as activity recognition and tracking also suffers, because in most cases the decision-making process in VSNs assumes ideal detection capabilities for the cameras. Recognizing that it is necessary to introduce robustness into the decision process, this paper presents results towards uncertainty-aware VSNs. Specifically, we introduce a flexible uncertainty model that can be used to study the behaviour of missed detections in a camera network. We also show how to utilize the model to develop uncertainty-aware coordination and decision-making solutions to improve the efficiency of VSNs. Our experimental results in an active vision application indicate that the proposed solution is able to improve the robustness and reliability of VSNs.
Citations: 3
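
As a toy illustration of how a missed-detection model can feed camera coordination, the sketch below assumes a distance-based miss probability; the functional form and constants are invented for illustration and are not the model proposed in the paper.

```python
import numpy as np

# Minimal sketch of a per-camera missed-detection model and its use in
# uncertainty-aware camera selection. The distance-based form of the
# miss probability is an assumed illustrative choice.

rng = np.random.default_rng(1)

def miss_probability(cam_pos, target_pos, p0=0.05, scale=10.0):
    # Miss probability grows with distance from the camera.
    d = np.linalg.norm(np.asarray(cam_pos) - np.asarray(target_pos))
    return min(1.0, p0 + 0.1 * d / scale)

def simulate_detection(cam_pos, target_pos):
    return rng.random() > miss_probability(cam_pos, target_pos)

cameras = [(0, 0), (20, 0), (10, 15)]
target = (4, 3)

# Uncertainty-aware assignment: pick the camera most likely to detect.
best = min(cameras, key=lambda c: miss_probability(c, target))
print("assigned camera:", best)
print("detections this frame:", [simulate_detection(c, target) for c in cameras])
```
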
Discriminative poses for early recognition in multi-camera networks
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789117
Scott Spurlock, Junjie Shan, Richard Souvenir
Abstract: We present a framework for early action recognition in a multi-camera network. Our approach balances recognition accuracy with speed by dynamically selecting the best camera for classification. We follow an iterative clustering approach to learn sets of keyposes that are discriminative for recognition as well as for predicting the best camera for classification of future frames. Experiments on multi-camera datasets demonstrate the applicability of our view-shifting framework to the problem of early recognition.
Citations: 2
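
A minimal sketch of the keypose idea, assuming k-means as the clustering primitive: cluster pose descriptors, score clusters by class purity, and classify a frame early only when it lands in a discriminative keypose. Feature dimensions, cluster count, and the purity threshold are illustrative; the paper's iterative clustering and camera-selection criteria are more involved.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of learning discriminative keyposes: cluster pose
# descriptors, score each cluster by class purity, and let frames that
# match a high-purity keypose trigger early classification.

rng = np.random.default_rng(2)
n, d, n_classes, k = 300, 16, 3, 12
X = rng.normal(size=(n, d))                 # pose descriptors (placeholder)
y = rng.integers(0, n_classes, size=n)      # action labels

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

purity = np.zeros(k)
majority = np.zeros(k, dtype=int)
for c in range(k):
    counts = np.bincount(y[km.labels_ == c], minlength=n_classes)
    majority[c] = counts.argmax()
    purity[c] = counts.max() / max(1, counts.sum())

def early_predict(frame_feat, min_purity=0.6):
    # Classify early only when the nearest keypose is discriminative.
    c = km.predict(frame_feat.reshape(1, -1))[0]
    return majority[c] if purity[c] >= min_purity else None  # else wait

print(early_predict(X[0]))
```
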
On filter banks of texture features for mobile food classification
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789132
N. Martinel, C. Piciarelli, C. Micheloni, G. Foresti
Abstract: Obesity has become one of the most common diseases in many countries. To manage it, obese people should constantly monitor their daily meals, both for self-limitation and to provide useful statistics for their dietitians. This has led to the recent rise in popularity of food diary applications on mobile devices, where users can manually annotate their food intake. To overcome the tediousness of such a process, several works on automatic food image recognition have been proposed, typically based on texture feature extraction and classification. In this work, we analyze different texture filter banks to evaluate their performance and propose a method to automatically aggregate the best features for food classification purposes. Particular emphasis is put on the computational burden of the system, to match the limited capabilities of mobile devices.
Citations: 9
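
As a hedged sketch of the filter-bank pipeline, the code below builds a small bank of oriented Gabor kernels and pools their responses into a texture descriptor; the kernel parameters, orientations, and pooling statistics are illustrative choices, not the specific banks evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

# Minimal sketch of a texture filter bank: small oriented Gabor kernels
# whose pooled responses form a feature vector for a food classifier.

def gabor_kernel(theta, size=9, sigma=2.0, freq=0.25):
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def texture_features(image, n_orientations=4):
    feats = []
    for i in range(n_orientations):
        k = gabor_kernel(theta=i * np.pi / n_orientations)
        response = convolve(image.astype(float), k, mode='reflect')
        # Aggregate each filter's response map into two statistics.
        feats += [np.abs(response).mean(), response.std()]
    return np.array(feats)

image = np.random.default_rng(3).random((64, 64))
print(texture_features(image))  # 8-dimensional texture descriptor
```
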
Distributed adaptive task allocation for energy conservation in camera sensor networks
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789131
C. Kyrkou, T. Theocharides, C. Panayiotou, M. Polycarpou
Abstract: Camera Sensor Networks (CSNs) have a large and diverse application spectrum, ranging from security and safety-critical applications to industrial monitoring and augmented reality. Cameras in such networks are equipped with real-time multitasking processors and communication infrastructure, which enables them to perform various computer vision tasks in a distributed and collaborative manner. In many cases, the cameras in the network operate under limited or unreliable power sources. Therefore, in order to extend the CSN lifetime, it is important to manage the energy consumption of the cameras, which is related to the workload of the vision tasks they perform. By managing and assigning vision tasks to cameras in an energy-aware manner, it is possible to extend the network lifetime. In this paper we address this problem by proposing a distributed market-based solution where cameras bid for tasks using an energy-aware utility function. An additional novelty of the proposed solution is that the cameras can adapt their bidding strategy based on their remaining energy levels. The results for different CSN configurations and setups show that the proposed methodology can increase network lifetime by 10%-30% while improving the number of dynamic and static tasks being monitored by 30%-50%.
Citations: 2
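
A minimal sketch of market-based allocation with an energy-aware utility: each camera's bid for a task is scaled by its remaining battery fraction, so depleted cameras bid more conservatively. The utility form, the coverage scores, and the single-round auction are assumptions for illustration, not the paper's bidding strategy.

```python
# Minimal sketch of market-based task allocation with an energy-aware
# utility. The utility form and constants are assumed for illustration.

def bid(camera, task):
    # Suitability of the camera for the task (e.g. coverage quality).
    suitability = camera["coverage"].get(task, 0.0)
    # Energy-aware scaling: low batteries bid more conservatively.
    energy_factor = camera["energy"] / camera["capacity"]
    return suitability * energy_factor

def allocate(cameras, tasks):
    assignment = {}
    for task in tasks:
        bids = {c["id"]: bid(c, task) for c in cameras}
        winner = max(bids, key=bids.get)
        if bids[winner] > 0:
            assignment[task] = winner
    return assignment

cameras = [
    {"id": "cam1", "energy": 80, "capacity": 100, "coverage": {"t1": 0.9, "t2": 0.4}},
    {"id": "cam2", "energy": 90, "capacity": 100, "coverage": {"t1": 0.6, "t2": 0.7}},
]
print(allocate(cameras, ["t1", "t2"]))  # {'t1': 'cam1', 't2': 'cam2'}
```
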
High performance multi-camera tracking using shapes-from-silhouettes and occlusion removal
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789127
Maarten Slembrouck, Jorge Oswaldo Niño Castañeda, Gianni Allebosch, Dimitri Van Cauwelaert, P. Veelaert, W. Philips
Abstract: Reliable indoor tracking of objects and persons is still a major challenge in computer vision. As GPS is unavailable indoors, other methods have to be used. Multi-camera systems using colour cameras are one approach to tackle this problem. In this paper we present a method based on shapes-from-silhouettes, where the foreground/background segmentation videos are produced with state-of-the-art methods. We show that our tracker outperforms all the other trackers we evaluated and obtains an accuracy of 97.89% within 50 cm of the ground-truth position on the proposed dataset.
Citations: 6
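
To illustrate the shapes-from-silhouettes step in isolation, the toy sketch below intersects back-projected silhouettes from two orthogonal views on a 2D grid; real systems use calibrated projections, more views, and the paper's occlusion-removal stage. Note how multiple objects create "ghost" cells that extra views would eliminate.

```python
import numpy as np

# Minimal sketch of the shapes-from-silhouettes idea on a 2D grid: two
# orthogonal "cameras" each observe a 1D silhouette (the object's
# extent along one axis), and the hull is the intersection of the
# back-projected silhouettes. This toy setup only shows the
# intersection step, not the paper's calibration or occlusion removal.

grid = np.zeros((40, 40), dtype=bool)
grid[10:18, 22:30] = True            # ground-truth object 1
grid[25:30, 5:12] = True             # ground-truth object 2

sil_x = grid.any(axis=0)             # camera looking along rows
sil_y = grid.any(axis=1)             # camera looking along columns

# Back-project each silhouette as a slab and intersect the slabs.
hull = np.outer(sil_y, sil_x)

print("object cells:", grid.sum(), "hull cells:", hull.sum())
# The hull contains the objects plus ghost intersections; additional
# views shrink it toward the true shapes.
```
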
Efficient foreground-background segmentation using local features for object detection
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789136
F. Carrara, Giuseppe Amato, F. Falchi, C. Gennaro
Abstract: In this work, a local feature based background model for background-foreground feature segmentation is presented. In local feature based computer vision applications, a local feature based model offers advantages over classical pixel-based ones in terms of informativeness, robustness and segmentation performance. The method discussed in this paper is a block-wise background model in which we propose to store, for each block, the positions of only the most frequent local feature configurations. Incoming local features are classified as background or foreground depending on their position with respect to the stored configurations. The resulting classification is refined by applying a block-level analysis. Experiments on a public dataset were conducted to compare the presented method to classical pixel-based background modelling.
Citations: 1
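
A minimal sketch of the block-wise model, assuming quantized descriptor words as the local feature representation: each block stores its most frequent configurations, and incoming features matching a stored configuration are labelled background. Block size, quantization, and the top-k threshold are illustrative choices.

```python
from collections import Counter, defaultdict

# Minimal sketch of a block-wise local-feature background model: each
# block keeps the most frequent quantized feature configurations seen
# during training; an incoming feature is background if it matches one.

BLOCK = 16  # block size in pixels

def block_of(x, y):
    return (x // BLOCK, y // BLOCK)

def quantize(x, y, desc_word):
    # Configuration = coarse position inside the block + descriptor word.
    return (x % BLOCK // 4, y % BLOCK // 4, desc_word)

model = defaultdict(Counter)

def train(features):
    for x, y, word in features:
        model[block_of(x, y)][quantize(x, y, word)] += 1

def is_background(x, y, word, top=5):
    frequent = [cfg for cfg, _ in model[block_of(x, y)].most_common(top)]
    return quantize(x, y, word) in frequent

# Training frames: a repeated feature at (3, 4) with visual word 7.
train([(3, 4, 7)] * 20 + [(10, 12, 2)] * 3)
print(is_background(3, 4, 7))    # True: matches a stored configuration
print(is_background(3, 4, 9))    # False: classified as foreground
```
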
Building low-cost wireless image sensor networks: from single camera to multi-camera system
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789118
C. Pham, V. Lecuire
Abstract: Wireless Image Sensor Networks (WISNs), where sensor nodes are equipped with miniaturized CMOS cameras to provide visual information, are a promising technology for situation awareness, search-and-rescue, and intrusion detection applications. In this paper, we present an off-the-shelf image sensor based on Arduino boards with a CMOS uCamII camera. The image sensor works with raw 128×128 images, implements an image change detection mechanism based on a simple-differencing technique, and integrates a packet loss-tolerant image compression technique that can run on platforms with very limited memory. We detail the performance and energy consumption measures of the various image platforms and highlight how both medium-end and low-end platforms can be supported. From the single-camera system, we describe the extension to a multi-camera system which provides omnidirectional sensing at a very low cost for large-scale deployment.
Citations: 9
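
A minimal sketch of the simple-differencing change detection such a node can run before spending energy on compression and radio transmission; the pixel and area thresholds below are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Minimal sketch of simple-differencing change detection on a 128x128
# raw frame: transmit only when enough pixels differ from a reference.

def frame_changed(frame, reference, pixel_thresh=25, ratio_thresh=0.02):
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    changed_ratio = (diff > pixel_thresh).mean()
    return changed_ratio > ratio_thresh

rng = np.random.default_rng(4)
reference = rng.integers(0, 256, (128, 128), dtype=np.uint8)

frame = reference.copy()
frame[40:70, 50:90] = 255                    # an intruder-sized bright region
print(frame_changed(frame, reference))       # True: transmit this frame
print(frame_changed(reference, reference))   # False: stay idle
```
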
Distributed multi target tracking in camera networks using sigma point information filters
Proceedings of the 9th International Conference on Distributed Smart Cameras. Pub Date: 2015-09-08. DOI: 10.1145/2789116.2789143
K. Kumar, K. Ramakrishnan, G. Rathna
Abstract: Multiple target tracking is an important problem in analysing video data in camera networks. Distributed processing is a promising scheme to deal with the huge volume of video data in camera networks. This paper addresses the problem of distributed multiple target tracking in camera networks. Each camera shares measurements with its immediate neighbours and performs inter-camera measurement-to-measurement association in a distributed manner. The measurements are assigned to the targets using the nearest-neighbour principle. To update the target state, we use probabilistic data association with sigma point information filters. This filter is integrated with a consensus algorithm to develop a distributed multi-target tracking algorithm. We evaluated the proposed algorithm on various real-world datasets and show that it outperforms other related state-of-the-art distributed algorithms.
Citations: 0
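
As a hedged sketch of two ingredients the paper combines, the code below performs greedy nearest-neighbour measurement-to-track association and one average-consensus step on information-form estimates; the full method instead uses probabilistic data association and sigma-point (unscented) information filter updates, which this toy omits.

```python
import numpy as np

# Minimal sketch: nearest-neighbour association at one camera, plus one
# average-consensus iteration on information-form estimates shared with
# neighbouring cameras. Gate size and weights are illustrative.

def associate(tracks, measurements, gate=3.0):
    # Greedy nearest-neighbour assignment of measurements to tracks.
    pairs = {}
    for t_id, pos in tracks.items():
        dists = np.linalg.norm(measurements - pos, axis=1)
        j = int(dists.argmin())
        if dists[j] < gate:
            pairs[t_id] = j
    return pairs

def consensus_step(info_vectors, info_matrices, weight=0.5):
    # One average-consensus iteration over a fully connected neighbourhood.
    y_bar = np.mean(info_vectors, axis=0)
    Y_bar = np.mean(info_matrices, axis=0)
    new_y = [(1 - weight) * y + weight * y_bar for y in info_vectors]
    new_Y = [(1 - weight) * Y + weight * Y_bar for Y in info_matrices]
    return new_y, new_Y

tracks = {0: np.array([1.0, 1.0]), 1: np.array([5.0, 5.0])}
measurements = np.array([[1.2, 0.9], [4.8, 5.3], [9.0, 9.0]])
print(associate(tracks, measurements))  # {0: 0, 1: 1}

# Each camera holds an information-form estimate y = Y @ x of track 0.
ys = [np.array([1.1, 1.0]), np.array([0.9, 1.2])]
Ys = [np.eye(2), 2 * np.eye(2)]
ys, Ys = consensus_step(ys, Ys)
print(np.linalg.solve(Ys[0], ys[0]))  # fused state estimate at camera 1
```
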