2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras: Latest Publications

Smart camera design for realtime high dynamic range imaging
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042918
Pierre-Jean Lapray, B. Heyrman, M. Rossé, D. Ginhac
Abstract: Many camera sensors suffer from limited dynamic range, resulting in a lack of clear detail in displayed images and videos. This paper describes our approach to generating high dynamic range (HDR) images from an image sequence while modifying exposure times for each new frame. For this purpose, we propose an FPGA-based architecture that can produce real-time high dynamic range video from successive image acquisitions. Our hardware platform is built around a standard low dynamic range CMOS sensor and a Virtex 5 FPGA board. The CMOS sensor is an EV76C560 provided by e2v. This 1.3-megapixel device offers novel pixel integration/readout modes and embedded image pre-processing capabilities, including multiframe acquisition with various exposure times. Our approach consists of a pipeline of different algorithmic phases: automatic exposure control during image capture, alignment of successive images to minimize camera and object movement, construction of an HDR image by combining the multiple frames, and final tone mapping for viewing on an LCD display. Our aim is to achieve a real-time video rate of 25 frames per second at the full sensor resolution of 1,280 × 1,024 pixels.
Citations: 12
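The core of such a pipeline is combining frames captured at different exposure times into a single radiance estimate. As a rough software illustration only (not the authors' FPGA design), the sketch below merges multi-exposure frames, assuming a linear sensor response and a simple hat-shaped weighting function that trusts mid-range pixels most; all names and parameters here are hypothetical.

```python
def weight(z, z_min=0, z_max=255):
    # Hat weighting: saturated and underexposed pixels get low weight.
    mid = 0.5 * (z_min + z_max)
    return (z - z_min) if z <= mid else (z_max - z)

def merge_hdr(frames, exposures):
    """frames: equal-length lists of 8-bit pixel values, one per exposure.
    exposures: exposure time in seconds for each frame.
    Returns a per-pixel relative radiance estimate."""
    n = len(frames[0])
    hdr = []
    for i in range(n):
        num = den = 0.0
        for frame, t in zip(frames, exposures):
            z = frame[i]
            w = weight(z)
            # Linear-response assumption: radiance ~ pixel value / exposure.
            num += w * (z / 255.0) / t
            den += w
        hdr.append(num / den if den > 0 else 0.0)
    return hdr
```

With a truly linear sensor, halving the exposure should halve the pixel value, so both frames vote for the same radiance and the weighted average leaves it unchanged.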
Demo: Real-time depth extraction and viewpoint interpolation on FPGA
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042943
Guanyu Yi, H. Yeh, G. Vanmeerbeeck, Ke Zhang, G. Lafruit
Abstract: In this demo, we demonstrate a real-time viewpoint interpolation application on FPGA. Viewpoint interpolation is the process of synthesizing plausible in-between views, so-called virtual camera views, from a couple of surrounding fixed camera views. Stereo matching is used to extract depth information by computing a disparity map from a pair of input images. With the depth information, virtual views at any point between the two cameras are computed through view interpolation. To make viewpoint interpolation possible for low/moderate-power consumer applications, a further quality/complexity trade-off study is required to reconcile algorithmic quality with architectural performance. In essence, the inter-dependencies between the different algorithmic steps in the processing chain are thoroughly analyzed, aiming at an overall quality-performance model that pinpoints which algorithmic functionalities can be simplified with minor (preferably no) global input-output quality degradation, while maximally reducing their implementation complexity with respect to arithmetic and line buffer requirements. Compared to state-of-the-art CPU and GPU platforms running at clock speeds of several GHz, our low-power 65 MHz FPGA implementation achieves a speedup of one order of magnitude without compromising visual quality, reaching over 60 frames per second of high-definition (1024×768), high-quality, 64-disparity search range stereo matching and enabling viewpoint interpolation in low-power, embedded applications.
Citations: 2
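The disparity map mentioned above assigns each pixel the horizontal shift between its left- and right-image projections. A minimal sum-of-absolute-differences (SAD) block-matching sketch for a single rectified scanline, purely illustrative and far simpler than the demo's hardware pipeline, might look like this:

```python
def disparity_row(left, right, max_disp=4, win=1):
    """Per-pixel disparity for one scanline via SAD block matching.
    left/right: lists of intensities from a rectified stereo pair."""
    n = len(left)
    disp = [0] * n
    for x in range(n):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for k in range(-win, win + 1):
                # Clamp window indices at the image borders.
                xl = min(max(x + k, 0), n - 1)
                xr = min(max(x + k - d, 0), n - 1)
                cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

If the right scanline is the left one shifted by two pixels, the matcher should recover a disparity of 2 around distinctive features.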
A socio-economic approach to online vision graph generation and handover in distributed smart camera networks
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042902
Lukas Esterle, Peter R. Lewis, Marcin Bogdański, B. Rinner, X. Yao
Abstract: In this paper we propose an approach based on self-interested autonomous cameras, which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to grow the vision graph at runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, generating the vision graph online, which permits the addition and removal of cameras during runtime, and on the other hand, relying only on local information, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided.
Citations: 38
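To make the two mechanisms concrete, here is a hedged sketch of (a) a market-style handover decision, where the owning camera sells tracking responsibility only to a bidder that values the object more, and (b) an ant-colony-style vision-graph update with pheromone evaporation and deposit. The function names, the evaporation/deposit parameters, and the exact auction rule are illustrative assumptions, not the paper's specification.

```python
def handover(owner_id, owner_utility, bids):
    """Trade tracking responsibility to the highest bidder, but only
    if that camera values the object more than the current owner."""
    if not bids:
        return owner_id
    best = max(bids, key=bids.get)
    return best if bids[best] > owner_utility else owner_id

def update_vision_graph(graph, seller, buyer, evaporation=0.1, deposit=1.0):
    """Ant-colony-style link strengths: evaporate every edge a little,
    then deposit pheromone on the edge used for this handover."""
    for a in graph:
        for b in graph[a]:
            graph[a][b] *= (1.0 - evaporation)
    graph.setdefault(seller, {})
    graph[seller][buyer] = graph[seller].get(buyer, 0.0) + deposit
```

Edges that repeatedly carry successful handovers grow strong, while unused edges decay toward zero, so the graph topology emerges online without calibration.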
PhD forum: BiSeeMos: A fast embedded stereo smart camera
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042960
Frantz Pelissier, F. Berry
Abstract: This paper presents a new embedded stereo vision system called BiSeeMos, designed for fast stereo vision computation of up to 160 frames per second at a resolution of 1024 × 1024 pixels. The heart of the system is a Cyclone III FPGA from the ALTERA Corporation with 119,088 reconfigurable logic elements, 3,888 Kbits of memory and 288 embedded multipliers. A versatile vision framework has been implemented in the system, along with a Census stereo vision algorithm to validate the platform.
Citations: 1
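The Census transform mentioned above is popular on FPGAs because it replaces intensity comparisons with bit operations: each pixel gets a bit string recording which neighbours are darker than it, and matching costs become Hamming distances. A minimal software sketch (window size and border handling are illustrative choices, not BiSeeMos specifics):

```python
def census_transform(img, x, y, win=1):
    """Census descriptor at (x, y): one bit per neighbour,
    set when the neighbour is darker than the centre pixel."""
    h, w = len(img), len(img[0])
    centre = img[y][x]
    desc = 0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            if dx == 0 and dy == 0:
                continue  # the centre pixel itself carries no bit
            ny = min(max(y + dy, 0), h - 1)  # clamp at image borders
            nx = min(max(x + dx, 0), w - 1)
            desc = (desc << 1) | (1 if img[ny][nx] < centre else 0)
    return desc

def hamming(a, b):
    """Matching cost between two Census descriptors."""
    return bin(a ^ b).count("1")
```

Because the descriptor depends only on orderings, not absolute intensities, it is robust to radiometric differences between the two cameras, which is one reason it suits embedded stereo.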
Demo: Real-time 3D visualization of multi-camera room occupancy monitoring for immersive communication systems
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042956
Aljosha Demeulemeester, Charles-Frederik Hollemeersch, P. Lambert, R. Walle, Vedran Jelaca, Sebastian Gruenwedel, Jorge Oswaldo Niño Castañeda, Dimitri Van Cauwelaert, P. Veelaert, P. V. Hese, W. Philips
Abstract: This demo paper introduces a flexible 3D visualization framework that can visualize an abstract representation of real-world events such as human movement and human interaction in an immersive way by rendering animated avatars. In the presented demo, events are detected and sent to the visualization by a multi-camera room occupancy monitoring system that uses video analysis to track people in a room. Extracting high-level information about a scene and visualizing the relevant events in a 3D virtual environment can enable future immersive communication systems.
Citations: 0
Local image quality metric for a distributed smart camera network with overlapping FOVs
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042920
E. Shen, R. Hornsey
Abstract: A set of camera selection templates, using simple rules based on a local (camera-level) metric, is implemented for a twelve-camera inward-looking distributed smart camera network. The local metric represents the quality of detection of the target-of-interest for a given camera node and is based on a measurable target parameter. To understand the effectiveness of the camera selections, an analytical framework consisting of a global (system-level) metric has been designed. The camera selection methods are able to maintain desirable global metric performance while using a subset of the total cameras available. This holds even when the system is perturbed by the loss of a single camera or by a single occluding target.
Citations: 7
Fusion of multiple trackers in video networks
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042927
Yiming Li, B. Bhanu
Abstract: In this paper, we address the camera selection problem by fusing the performance of multiple trackers. Currently, camera selection/hand-off approaches depend largely on the performance of the deployed tracker to decide when to hand off from one camera to another. However, a slight inaccuracy of the tracker may pass wrong information to the system, so that the wrong camera is selected and the error propagates. We present a novel approach that uses multiple state-of-the-art trackers, based on different features and principles, to generate multiple hypotheses, and fuses the performance of these trackers for camera selection. The proposed approach has very low computational overhead and can achieve real-time performance. We perform experiments with different numbers of cameras and persons on different datasets to show the superior results of the proposed approach. We also compare results with a single tracker to show the merits of integrating results from multiple trackers.
Citations: 4
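The fusion step can be pictured as a weighted score combination: each tracker scores every candidate camera, and the camera with the highest summed score wins the hand-off. The sketch below is a generic score-level fusion under that assumption; the paper's actual fusion rule and weighting may differ.

```python
def fuse_tracker_scores(tracker_scores, weights=None):
    """tracker_scores: list of {camera_id: score} maps, one per tracker.
    Returns (best_camera, fused_scores) after weighted summation."""
    if weights is None:
        weights = [1.0] * len(tracker_scores)  # trust all trackers equally
    fused = {}
    for scores, w in zip(tracker_scores, weights):
        for cam, s in scores.items():
            fused[cam] = fused.get(cam, 0.0) + w * s
    return max(fused, key=fused.get), fused
```

In the example below, one tracker mildly prefers the wrong camera, but a second, more confident tracker outvotes it, which is exactly the failure mode single-tracker hand-off cannot correct.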
Distributed smart cameras for hard real-time obstacle detection in control applications
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042935
Herwig Guggi, B. Rinner
Abstract: This paper describes the integration of distributed image analysis in a smart camera network with a control application. The distributed smart cameras analyze the environment of the controlled object and transfer accurate position and orientation information of all detected obstacles to the controller within guaranteed time bounds. With this information the controller can optimize the trajectory of the load. We present two different approaches for estimating the bounding boxes of detected obstacles and compare their benefits and drawbacks.
Citations: 4
PhD forum: A cyber-physical system approach to embedded visual servoing
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042950
Zhenyu Ye, H. Corporaal, P. Jonker
Abstract: Visual servoing, which applies computer vision as a feedback source for control, is becoming a cost-effective solution for high-performance mechatronic systems. However, the potential of visual servoing systems is limited by the current design methodology, which explores the cyber domain and the physical domain separately. We propose a cyber-physical system approach to overcome this limitation.
Citations: 2
Real-time multi-view human action recognition using a wireless camera network
2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras | Pub Date: 2011-10-13 | DOI: 10.1109/ICDSC.2011.6042901
Sricharan Ramagiri, R. Kavi, V. Kulathumani
Abstract: In this paper, we describe how information obtained from multiple views using a network of cameras can be effectively combined to yield a reliable and fast human action recognition system. We describe a score-based fusion technique for combining information from multiple cameras that can handle arbitrary orientation of the subject with respect to the cameras. Our fusion technique does not rely on a symmetric deployment of the cameras and does not require that the camera network deployment configuration be preserved between training and testing phases. To classify human actions, we use motion information characterized by the spatio-temporal shape of a human silhouette over time. By relying on feature vectors that are relatively easy to compute, our technique lends itself to an efficient distributed implementation while maintaining a high frame capture rate. We evaluate the performance of our system under different camera densities and view availabilities. Finally, we demonstrate the performance of our system in an online setting where the camera network is used to identify human actions as they are being performed.
Citations: 50
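Score-based fusion across views can be reduced to a simple idea: each camera produces a per-action similarity score, and scores are accumulated per action over whatever views happen to be available. The sketch below captures that skeleton only; the paper's orientation handling and score normalization are not reproduced here.

```python
def classify_action(view_scores):
    """view_scores: list (one entry per available camera) of
    {action_label: similarity score} maps.
    Score-level fusion: sum scores per action across all views
    and return the best-scoring action."""
    totals = {}
    for scores in view_scores:
        for action, s in scores.items():
            totals[action] = totals.get(action, 0.0) + s
    return max(totals, key=totals.get)
```

Because the fusion is a plain sum over available views, cameras can drop out (or be added) between training and testing without changing the classifier's structure, which mirrors the deployment-independence claim above.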