Proceedings of the 13th International Conference on Distributed Smart Cameras: Latest Publications

A Battery Powered Vision Sensor for Forensic Evidence Gathering
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349816
Yu Zou, M. Lecca, M. Gottardi, G. Urlini, N. Vretos, L. Gymnopoulos, P. Daras
Abstract: We describe a novel battery-powered vision sensor developed to support the surveillance and crime-prevention activities of Law Enforcement Agencies (LEAs) in isolated or peripheral areas without access to the energy grid. The sensor consists of a low-power, always-on vision chip interfaced with a processor that executes visual tasks on demand. The chip continuously inspects the imaged scene in search of events potentially related to criminal acts. When an event is detected, the chip wakes up the processor, which is normally idle, and starts delivering images to it together with information on the region containing the event. The processor processes the received data to confirm and recognize the detected action and, if necessary, sends an alert to the LEA. The sensor was developed within an H2020 EU project and has been successfully tested in real-life scenarios.
Citations: 0
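The wake-on-event split between the always-on chip and the on-demand processor lends itself to a simple control loop. Below is a minimal sketch of that loop; the `chip` and `processor` interfaces (`read_event`, `read_frame`, `classify_action`, `alert_lea`) are hypothetical placeholders, not the sensor's actual API.

```python
# Minimal sketch of the always-on / wake-on-event split described above.
# All object interfaces here are hypothetical stand-ins.
import time

def sensor_loop(chip, processor):
    """Low-power loop: the chip watches the scene; the processor sleeps."""
    while True:
        event = chip.read_event()          # ultra-low-power event poll
        if event is None:
            time.sleep(0.01)               # processor stays idle
            continue
        processor.wake()                   # chip wakes the idle processor
        frame = chip.read_frame()          # full image plus event region
        label = processor.classify_action(frame, event.region)
        if label is not None:              # action confirmed and recognized
            processor.alert_lea(label, frame)
        processor.idle()                   # return to the low-power state
```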
Unsupervised continuous camera network pose estimation through human mesh recovery
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349803
Nicola Garau, N. Conci
Abstract: Camera resectioning is essential in computer vision and 3D reconstruction to estimate the position of matching pinhole cameras in 3D worlds. While the internal camera parameters are usually known or can be easily computed offline, in camera networks extrinsic parameters need to be computed each time a camera changes position, thus not allowing for smooth and dynamic network reconfiguration. In this work we propose a fully markerless, unsupervised, and automatic tool for the estimation of the extrinsic parameters of a camera network, based on 3D human mesh recovery from RGB videos. We show how it is possible to retrieve, from monocular images and with just a weak prior knowledge of the intrinsic parameters, the real-world position of the cameras in the network, together with the floor plane. Our solution also works with a single RGB camera and allows the user to dynamically add, re-position, or remove cameras from the network.
Citations: 3
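Once 3D body joints are recovered from a human mesh, estimating a camera's extrinsics reduces to a perspective-n-point problem. A minimal sketch with OpenCV's solvePnP follows; the `joints_3d`/`joints_2d` inputs stand in for the paper's mesh-recovery pipeline, which is not reproduced here.

```python
# Sketch: one camera's extrinsics from 3D body joints and their 2D
# detections (inputs assumed, not the paper's actual pipeline).
import cv2
import numpy as np

def estimate_extrinsics(joints_3d, joints_2d, K):
    """joints_3d: (N,3) world-frame joints; joints_2d: (N,2) pixel points;
    K: (3,3) approximate intrinsics (the paper assumes only a weak prior)."""
    ok, rvec, tvec = cv2.solvePnP(
        joints_3d.astype(np.float32),
        joints_2d.astype(np.float32),
        K.astype(np.float32),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)             # rotation matrix from axis-angle
    cam_pos = (-R.T @ tvec).ravel()        # camera center in world frame
    return R, tvec, cam_pos
```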
Displacement Error Analysis of 6-DoF Virtual Reality
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349812
Ridvan Aksu, Jacob Chakareski, V. Velisavljevic
Abstract: Virtual view synthesis is a critical step in enabling Six Degrees of Freedom (6DoF) immersion experiences in Virtual Reality (VR). It comprises synthesis of virtual viewpoints for a user navigating the immersion environment, based on a small subset of captured viewpoints featuring texture and depth maps. We investigate the extreme values of the displacement error in view synthesis caused by depth map quantization, for a given 6DoF VR video dataset, particularly based on the camera settings, scene properties, and the depth map quantization error. We establish a linear relationship between the displacement error and the quantization error, scaled by the sine of the angle between the location of the object and the virtual view in the 3D scene, formed at the reference camera location. In the majority of cases, the horizontal and vertical displacement errors induced at a pixel location of a reconstructed 360° viewpoint comprising the immersion environment are respectively proportional to 3/5 and 1/5 of the respective quantization error. Moreover, the displacement error grows severely with the distance between the reference view and the synthesized view. Following these observations, displacement error values can be predicted for given pixel coordinates and quantization error, which can serve as a first step towards modeling the relationship between the encoding rate of reference views and the quality of synthesized views.
Citations: 1
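Restating the abstract's linear model in symbols (notation ours, not the paper's): with quantization error q and angle θ between the object and the virtual view, formed at the reference camera location,

```latex
% Reading of the abstract's linear model (notation ours, not the paper's):
% d is the displacement error at a pixel, q the depth quantization error.
\[
  d \;\propto\; q \,\sin\theta,
  \qquad
  d_{\mathrm{horiz}} \approx \tfrac{3}{5}\, q,
  \qquad
  d_{\mathrm{vert}} \approx \tfrac{1}{5}\, q .
\]
```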
Design Exploration of Multi-Camera Dome
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349808
Hiba H. Alqaysi, N. Lawal, I. Fedorov, Benny Thörnberg, M. O'nils
Abstract: Visual monitoring systems employ distributed smart cameras to effectively cover a given area satisfying specific objectives. The choice of camera sensors and lenses and their deployment affects design cost, the accuracy of the monitoring system, and the ability to position objects within the monitored area. Design cost can be reduced by investigating the deployment topology, such as grouping cameras together to form a dome at a node and optimizing it for the monitoring constraints. The constraints may include coverage area, the number of cameras that can be integrated in a node, and pixel resolution at a given distance. This paper presents a method for optimizing the design cost of a multi-camera dome by analyzing tradeoffs between monitoring constraints. The proposed method can be used to reduce monitoring cost while fulfilling design objectives. Results show how to increase coverage area for a given cost by relaxing requirements on design constraints. Multi-camera domes can be used in sky-monitoring applications such as monitoring wind parks and remote air-traffic control of airports, where an all-round field of view about a point must be monitored.
Citations: 2
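The resolution-at-distance and coverage constraints that drive the cost tradeoff can be sanity-checked with standard pinhole geometry. The sketch below is illustrative only and is not the paper's cost model.

```python
# Back-of-the-envelope checks for dome design constraints, using standard
# pinhole geometry (not the paper's cost model).
import math

def pixels_per_meter(focal_mm, pixel_pitch_um, distance_m):
    """Pixels covering one meter of the scene at a given distance."""
    return (focal_mm / 1000.0) / (pixel_pitch_um * 1e-6) / distance_m

def cameras_for_panorama(horizontal_fov_deg, overlap_deg=5.0):
    """Cameras needed for 360 degrees of horizontal coverage with overlap."""
    effective = horizontal_fov_deg - overlap_deg
    return math.ceil(360.0 / effective)

# e.g., a 25 mm lens with 3.45 um pixels resolves ~72 px/m at 100 m, and
# 60-degree cameras with 5 degrees of overlap need ceil(360/55) = 7 units.
```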
Face Liveness Detection Benchmark based on Stereo Matching
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349811
Mi Shi, Jiaming Sun, Zhenhuan Huang, Hainan Wang, Chunlei Liu, Baochang Zhang
Abstract: In this paper, a face liveness detection benchmark is established and maintained, in which 400 image pairs captured with a binocular camera are made openly available for research purposes. The dataset covers a number of people under varied expressions, illumination, and background conditions; 200 image pairs depict live human faces, and the other half are planar face pictures. The benchmark provides a platform for researchers to test stereo matching algorithms for liveness detection, where performance is evaluated as a binary classification of whether the detection response corresponds to a live human. The feasibility of SIFT features is verified based on a comparative analysis of the classification results, and a set of optimal parameters for the classification is given, providing a reference for further research. * denotes equal contributions.
Citations: 0
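A minimal stereo-based liveness check in the spirit of the benchmark can match SIFT keypoints across the pair and threshold the spread of their disparities, since a planar photo yields nearly constant disparity while a real face does not. The threshold below is illustrative, not the paper's tuned parameter set.

```python
# Illustrative liveness check: SIFT matches across a stereo pair, then a
# threshold on disparity spread (flat photo -> near-constant disparity).
import cv2
import numpy as np

def is_live(left_gray, right_gray, spread_thresh=3.0):
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left_gray, None)
    kp_r, des_r = sift.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return False                        # no features to match
    matches = cv2.BFMatcher().knnMatch(des_l, des_r, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 10:
        return False                        # too few matches to decide
    disparities = np.array(
        [kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in good]
    )
    return float(np.std(disparities)) > spread_thresh
```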
Accurate Single-Stream Action Detection in Real-Time
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349821
Yu Liu, Fan Yang, D. Ginhac
Abstract: Analyzing videos of human actions involves understanding the spatial and temporal context of the scenes. State-of-the-art action detection approaches have demonstrated impressive results using Convolutional Neural Networks (CNNs) within a two-stream framework. However, most of them operate in a non-real-time, offline fashion and are thus ill-suited to many emerging real-world scenarios such as autonomous driving and public surveillance. In addition, they are too computationally demanding to deploy on devices with limited power resources (e.g., embedded systems). To address these challenges, we propose an efficient single-stream action detection framework that exploits temporal coherence between successive video frames. This allows CNN appearance features to be cheaply propagated by motion rather than extracted from every frame. Furthermore, we utilize an implicit motion representation to amplify appearance features. Our method, based on motion-guided and motion-aware appearance features, is evaluated on the UCF-101-24 dataset. Experiments indicate that the proposed method achieves real-time action detection at up to 32 fps with accuracy comparable to the two-stream approach.
Citations: 1
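The key saving comes from warping a keyframe's CNN feature map with optical flow instead of re-running the backbone on every frame. The sketch below illustrates that general technique with Farneback flow; the flow method, shapes, and resolution handling are assumptions, not the paper's implementation.

```python
# Sketch of flow-guided feature propagation (the general technique; all
# details here are illustrative, not the paper's pipeline).
import cv2
import numpy as np

def propagate_features(feat, key_gray, cur_gray):
    """Warp a keyframe feature map (H, W, C) to the current frame using
    dense optical flow, instead of re-running the CNN backbone."""
    # Flow from the current frame back to the keyframe: for each current
    # pixel, where to sample in the keyframe.
    flow = cv2.calcOpticalFlowFarneback(
        cur_gray, key_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = feat.shape[:2]
    # Resize (and rescale) image-resolution flow to feature resolution.
    fh = cv2.resize(flow, (w, h))
    fh[..., 0] *= w / flow.shape[1]
    fh[..., 1] *= h / flow.shape[0]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + fh[..., 0]).astype(np.float32)
    map_y = (ys + fh[..., 1]).astype(np.float32)
    # Warp channel-wise (cv2.remap is limited in channel count).
    return np.stack(
        [cv2.remap(feat[..., c].astype(np.float32), map_x, map_y,
                   cv2.INTER_LINEAR) for c in range(feat.shape[2])],
        axis=-1)
```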
Automatic Generation of Waypoint Graphs from Distributed Ceiling-Mounted Smart Cameras for Decentralized Multi-Robot Indoor Navigation
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349814
Andrew Felder, Dillon Van Buskirk, C. Bobda
Abstract: This work addresses decentralized coordination of autonomous robots with assistance from overhead map-building and path planning cameras. We propose our solution, Decentralized Indoor Smart Mapping and Hierarchical Navigation (DISCMAHN). We propose an algorithm to generate a waypoint map for each overhead camera's region in real time. Furthermore, we propose a modified A* to perform fully-decentralized path planning. Waypoint generation was simulated to ensure it is both effective and efficient. Path planning was simulated on various, randomized environments to show effectiveness. Our method efficiently handles the cases where other robot navigation methods are otherwise weak and ineffective.
Citations: 2
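For reference, a baseline A* over a waypoint graph looks as follows; the paper's modified, fully-decentralized variant is not reproduced here, so this standard single-robot form is only a starting point.

```python
# Baseline A* over a waypoint graph (the paper modifies this for
# decentralized multi-robot planning; this is the textbook form).
import heapq
import math

def astar(graph, pos, start, goal):
    """graph: {node: [neighbor, ...]}; pos: {node: (x, y)} used for edge
    costs and the straight-line heuristic; returns a node list or None."""
    def h(n):
        return math.dist(pos[n], pos[goal])
    open_set = [(h(start), start)]
    came_from, g = {}, {start: 0.0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                     # reconstruct path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for nb in graph[cur]:
            cand = g[cur] + math.dist(pos[cur], pos[nb])
            if cand < g.get(nb, float("inf")):
                came_from[nb], g[nb] = cur, cand
                heapq.heappush(open_set, (cand + h(nb), nb))
    return None                             # goal unreachable
```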
Towards an FPGA-Based Smart Camera for Virtual Reality Applications
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3357133
Antonio Pérez Cruz, Abiel Aguilar-González, Madaín Pérez Patricio
Abstract: Virtual reality (VR) is an experience taking place within simulated and immersive environments. Although several virtual reality applications, such as gaming, medical education, and military training, have been developed in recent years, one important limitation remains: the tracking sensor. Commercial headsets such as the Oculus Rift or HTC Vive have tracking sensors which project active signals onto the user's body and limit motion understanding. To address this problem, we propose a novel passive sensor (an FPGA-based smart camera) that computes optical flow and estimates semantic information about the user's movement inside the camera's FPGA fabric. Using this semantic information as feedback for the virtual reality engine, accurate tracking is possible without projecting active signals onto the user's body, and several cameras can be deployed to achieve a better understanding of movement. Preliminary results are encouraging, demonstrating the feasibility of a vision-based tracking approach suitable for virtual reality applications.
Citations: 0
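As a software stand-in for the on-fabric computation, sparse Lucas-Kanade flow can be reduced to a single dominant motion vector that a VR engine could consume. The FPGA pipeline itself necessarily differs; this sketch only illustrates the idea.

```python
# Software stand-in for the on-fabric flow computation: sparse LK flow
# reduced to one dominant motion vector (not the FPGA implementation).
import cv2
import numpy as np

def dominant_motion(prev_gray, cur_gray):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2, np.float32)      # featureless scene
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return np.zeros(2, np.float32)      # all tracks lost
    # Median flow is robust to outlier tracks.
    return np.median((nxt[ok] - pts[ok]).reshape(-1, 2), axis=0)
```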
An Exploration of the Interaction Between capsules with ResNetCaps models
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349804
Rita Pucci, C. Micheloni, V. Roberto, G. Foresti, N. Martinel
Abstract: Image recognition has been an open challenge in computer vision since its early stages. The application of deep neural networks has yielded significant improvements towards its solution. Despite their classification abilities, deep networks need datasets with thousands of labelled images and prohibitive computational capabilities to achieve good performance. To address some of these challenges, the CapsNet neural architecture has recently been proposed as a promising machine learning model for image classification based on the idea of capsules. A capsule is a group of neurons whose output represents the presence of features of the same entity. In this paper, we start from the CapsNet architecture to explore and analyse the interaction between the presence of features within similar classes. This is achieved by means of feature-interaction techniques working on the outputs of two independent capsule-based models. To understand the importance of the interaction between capsules, extensive experiments have been carried out on four challenging datasets. Results show that exploiting the interaction between capsules yields performance improvements.
Citations: 4
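In CapsNet, a capsule's output is a vector whose length encodes the probability that the corresponding entity is present, via the standard squash nonlinearity. The sketch below fuses the capsule lengths of two independent models by simple averaging; this is an illustrative interaction, not necessarily the technique used in the paper.

```python
# Capsule outputs and a simple two-model interaction. The squash function
# is the standard CapsNet one; the averaging fusion is illustrative.
import numpy as np

def squash(s, eps=1e-9):
    """CapsNet squash: keeps direction, maps vector length into [0, 1)."""
    norm2 = np.sum(s * s, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def fused_prediction(caps_a, caps_b):
    """caps_a, caps_b: (num_classes, dim) capsule outputs of two models.
    Capsule length ~ probability that the class entity is present."""
    len_a = np.linalg.norm(squash(caps_a), axis=-1)
    len_b = np.linalg.norm(squash(caps_b), axis=-1)
    return int(np.argmax((len_a + len_b) / 2.0))
```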
Screening Early Children with Autism Spectrum Disorder via Expressing Needs with Index Finger Pointing
Proceedings of the 13th International Conference on Distributed Smart Cameras. Pub Date: 2019-09-09. DOI: 10.1145/3349801.3349826
Zhiyong Wang, Kai Xu, Honghai Liu
Abstract: The prevalence of autism spectrum disorder (ASD) is worrying, with an average rate of about 1% worldwide. However, existing clinical diagnostic methods and medical resources are far from adequate for wide-scale early screening and diagnosis, especially in remote areas. In this paper, a detailed protocol for the clinical task of Expressing Needs with Index Finger Pointing (ENIFP) is proposed, and a multi-sensor vision system is developed to record and analyze the children's performance. Mutual gaze and gesture are the main basis for judging the children's performance. An improved SSD algorithm is applied to locate the hands and recognize gestures. Eight subjects, including 5 typically developed adults and 3 children (2 with ASD and 1 without), participated in the experiment. The results show that the system can detect mutual gaze and accurately recognize the index-finger-pointing gesture, demonstrating its potential to assist in screening for ASD.
Citations: 6
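The decision layer implied by the protocol is a co-occurrence test: a trial passes when mutual gaze and the index-finger-pointing gesture are detected together. The sketch below assumes hypothetical detector outputs (sets of frame indices) standing in for the SSD-based pipeline.

```python
# Sketch of the pass/fail decision layer. The inputs are hypothetical
# detector outputs, not the paper's actual system interface.
def enifp_trial_passed(gaze_frames, gesture_frames, min_overlap=5):
    """gaze_frames/gesture_frames: sets of frame indices where mutual gaze
    and index-finger pointing were detected; require brief co-occurrence."""
    return len(gaze_frames & gesture_frames) >= min_overlap
```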