{"title":"Camera calibration parameters for oriented person re-identification","authors":"Alfredo Gardel Vicente, Jorge García, I. B. Muñoz, F. Espinosa, T. Chateau","doi":"10.1145/2789116.2789138","DOIUrl":"https://doi.org/10.1145/2789116.2789138","url":null,"abstract":"Person re-identification is a challenging task when strong appearance changes occur across the different viewpoints of a person captured by a distributed camera network. To address this issue, a multi-view oriented model of the person has been proposed. In this paper, we analyze the camera calibration parameters required for oriented person re-identification and propose a method to retrieve those values, so that people can be captured from perspectives with known orientations with respect to the camera. Usually, individual camera calibration parameters are not available for a large distributed camera network. We propose a self-calibration method based on short-term trackers of multiple persons; only two extrinsic camera calibration parameters are required. Experimental results on several public datasets demonstrate the effectiveness of our approach.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121215492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quasar - a new programming framework for real-time image/video processing on GPU and CPU","authors":"B. Goossens, Jonas De Vylder, S. Donné, W. Philips","doi":"10.1145/2789116.2802654","DOIUrl":"https://doi.org/10.1145/2789116.2802654","url":null,"abstract":"In this demonstration, we present a new programming framework, Quasar, for heterogeneous programming on CPU and single/multi-GPU. Our programming framework consists of a high-level language aimed at relieving the programmer from hardware-related implementation issues that commonly occur in CPU/GPU programming, allowing the programmer to focus on the specification, design, testing, and improvement of algorithms. We will demonstrate a real-time multi-camera processing application using our integrated development environment (IDE). The IDE offers various image/video processing-related debugging functions and performance profiling features.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124958501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A hybrid pose tracking approach for handheld augmented reality","authors":"Juan Li, Maarten Slembrouck, Francis Deboeverie, A. Bernardos, J. Besada, P. Veelaert, H. Aghajan, W. Philips, J. Casar","doi":"10.1145/2789116.2789128","DOIUrl":"https://doi.org/10.1145/2789116.2789128","url":null,"abstract":"With the rapid advances in mobile computing, handheld Augmented Reality draws increasing attention. Pose tracking of handheld devices is fundamental to registering virtual information with the real world and remains a crucial challenge. In this paper, we present a low-cost, accurate and robust approach combining fiducial tracking and inertial sensors for handheld pose tracking. Two LEDs are used as fiducial markers to indicate the position of the handheld device. They are detected by an adaptive thresholding method that is robust to illumination changes, and then tracked by a Kalman filter. By incorporating inclination information provided by the on-device accelerometer, the 6 degree-of-freedom (DoF) pose is estimated. Handheld devices are freed from computer vision processing, leaving most computing power available for applications. When one LED is occluded, the system is still able to recover the 6-DoF pose. The proposed tracking approach is evaluated against ground-truth data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves an accuracy of 1.77 cm in position estimation and 4.15 degrees in orientation estimation.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133199630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
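The Kalman-filter tracking of LED markers described in the abstract above could be sketched as follows. This is a generic constant-velocity Kalman filter for one LED's 2-D image position, not the authors' implementation; the class name, state layout, and noise parameters (`q`, `r`) are illustrative assumptions.

```python
import numpy as np

class LedKalmanTracker:
    """Constant-velocity Kalman filter for one LED's image position (sketch)."""

    def __init__(self, x0, y0, dt=1 / 30, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])     # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.array([[1, 0, dt, 0],         # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],          # only position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                    # process noise
        self.R = np.eye(2) * r                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

When a detection is missing (e.g. one LED occluded), `predict` alone can carry the track forward for a few frames, which mirrors the occlusion-recovery behavior the paper reports.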
{"title":"Multi-camera head pose estimation using an ensemble of exemplars","authors":"Scott Spurlock, Peter Malmgren, Hui Wu, Richard Souvenir","doi":"10.1145/2789116.2789123","DOIUrl":"https://doi.org/10.1145/2789116.2789123","url":null,"abstract":"We present a method for head pose estimation for moving targets in multi-camera environments. Our approach utilizes an ensemble of exemplar classifiers for joint head detection and pose estimation and provides finer-grained predictions than previous approaches. We incorporate dynamic camera selection, which allows a variable number of cameras to be selected at each time step and provides a tunable trade-off between accuracy and speed. On a benchmark dataset for multi-camera head pose estimation, our method predicts head pan angle with a mean absolute error of ~ 8° for different moving targets.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129338297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video-based activity level recognition for assisted living using motion features","authors":"Sandipan Pal, G. Abhayaratne","doi":"10.1145/2789116.2789140","DOIUrl":"https://doi.org/10.1145/2789116.2789140","url":null,"abstract":"Activities of daily living of the elderly are often monitored using passive sensor networks. With the reduction in camera prices, there is growing interest in video-based approaches to provide a smart, safe and independent living environment for the elderly. In this paper, activity level, in the context of tracking the movement pattern of an individual, is explored as a metric to monitor the daily living of the elderly. Activity levels can be an effective indicator of how busy an individual is, obtained by modelling motion features over time. The novel framework uses two different variants of the motion features captured from two camera angles and classifies them into different activity levels using neural networks. A new dataset for assisted living research, the Sheffield Activities of Daily Living (SADL) dataset, is used, in which each activity is simulated by 6 subjects and captured under two different illumination conditions within a simulated assisted living environment. The experiments show that the overall detection rate for both a single-camera and a dual-camera setup is above 80%.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122832058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Abnormal work cycle detection based on dissimilarity measurement of trajectories","authors":"Xingzhe Xie, Dimitri Van Cauwelaert, Maarten Slembrouck, Karel Bauters, Johannes Cottyn, D. V. Haerenborgh, H. Aghajan, P. Veelaert, W. Philips","doi":"10.1145/2789116.2789142","DOIUrl":"https://doi.org/10.1145/2789116.2789142","url":null,"abstract":"This paper proposes a method for detecting abnormal work cycles executed by factory workers, using their tracks obtained in a multi-camera network. The method analyzes both spatial and temporal dissimilarity between pairwise tracks. The main novelty of the method is calculating the spatial dissimilarity between pairwise tracks by aligning them with Dynamic Time Warping (DTW) based on coordinate distance, and the velocity and dwell-time dissimilarity using a different track alignment based on velocity difference. These dissimilarity measurements are used to cluster the executed work cycles and detect abnormalities. The experimental results show that our algorithm outperforms other methods in clustering the tracks because of the use of temporal dissimilarity.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117335083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
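The DTW alignment on coordinate distance mentioned in the abstract above can be sketched as a textbook dynamic-programming recurrence; this is not the authors' code, and their velocity-based variant would substitute a velocity difference for the Euclidean per-sample cost used here.

```python
import numpy as np

def dtw_distance(track_a, track_b):
    """Dynamic Time Warping distance between two 2-D trajectories.

    track_a, track_b: arrays of shape (n, 2) and (m, 2) holding
    (x, y) positions sampled over time.
    """
    n, m = len(track_a), len(track_b)
    # cost[i, j] = minimal cumulative distance aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(track_a[i - 1] - track_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

Pairwise DTW distances over all recorded work cycles would then feed a clustering step, with outlying cycles flagged as abnormal.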
{"title":"Parallel image gradient extraction core for FPGA-based smart cameras","authors":"Luca Maggiani, C. Bourrasset, F. Berry, J. Sérot, M. Petracca, C. Salvadori","doi":"10.1145/2789116.2789139","DOIUrl":"https://doi.org/10.1145/2789116.2789139","url":null,"abstract":"One of the biggest efforts in designing pervasive Smart Camera Networks (SCNs) is the implementation of complex and computationally intensive computer vision algorithms on resource-constrained embedded devices. For low-level processing, FPGA devices are excellent candidates because they support massive and fine-grained data parallelism with high data throughput. However, while FPGAs offer a way to meet the stringent constraints of real-time execution, their exploitation often requires significant algorithmic reformulations. In this paper, we propose a reformulation of a kernel-based gradient computation module especially suited to FPGA implementation. The resulting algorithm operates on the fly, without the need for video buffers, and delivers a constant throughput. It has been tested and used as the first stage of an application extracting Histograms of Oriented Gradients (HOG). Evaluation shows that its performance and low memory requirements match low-cost, memory-constrained embedded devices.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122896860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
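The gradient-extraction front end of a HOG pipeline, as referenced in the abstract above, can be illustrated in software terms (the paper's contribution is the buffer-free FPGA reformulation, which this vectorized NumPy sketch does not reproduce; function name and bin count are assumptions following classic HOG conventions).

```python
import numpy as np

def gradient_mag_orient(img, n_bins=9):
    """Per-pixel gradient magnitude and quantized orientation (HOG front end).

    img: 2-D float array. Returns (magnitude, bin_index) arrays; the
    orientation is unsigned (0..180 degrees) and quantized into n_bins,
    as in classic HOG. Borders are left at zero gradient.
    """
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal central difference
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    return mag, bins
```

A hardware version would compute the same per-pixel quantities in a streaming fashion, holding only the two previous image rows rather than a full frame.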
{"title":"A new 360-degree immersive game controller","authors":"Juan Li, B. Goossens, Maarten Slembrouck, Francis Deboeverie, P. Veelaert, H. Aghajan, W. Philips, J. Casar","doi":"10.1145/2789116.2802652","DOIUrl":"https://doi.org/10.1145/2789116.2802652","url":null,"abstract":"In this demo we present a novel approach that enables mobile devices to be used as six degree-of-freedom (DoF) video game controllers. Our approach uses a combination of built-in accelerometers and a multi-camera system to detect the position and orientation of a mobile device in 3D space. The sensor fusion approach is low-cost, accurate, fast and robust. The proposed system allows users to control games with physical movements instead of button presses as in traditional game controllers. Thus the proposed game controller provides a more immersive gaming experience, letting users feel that they are the players in the game instead of merely controlling the players. Compared to other accelerometer-based game controllers, the proposed system also detects the yaw angle, allowing the controller to work as a pointing device. Another strength of this design is the ability to provide 360-degree gaming experiences.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114983953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"STC-CAM1, IR-visual based smart camera system","authors":"Muhammad Imran, M. O’nils, Victor Kardeby, H. Munir","doi":"10.1145/2789116.2802649","DOIUrl":"https://doi.org/10.1145/2789116.2802649","url":null,"abstract":"Safety-critical applications require robust, real-time surveillance. For such applications, a vision sensor alone can give false positive results because of poor lighting conditions, occlusion, or adverse weather. In this work, a visual sensor is complemented by an infrared thermal sensor, which makes the system more resilient in unfavorable situations. In the proposed camera architecture, initial data-intensive tasks are performed locally on the sensor node, and then compressed data is transmitted to a client device where the remaining vision tasks are performed. The proposed camera architecture is demonstrated as a proof of concept; it offers a generic architecture with better surveillance while performing only low-complexity computations on the resource-constrained devices.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125969054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Person re-identification via efficient inference in fully connected CRF","authors":"Jiuqing Wan, Menglin Xing","doi":"10.1145/2789116.2789134","DOIUrl":"https://doi.org/10.1145/2789116.2789134","url":null,"abstract":"In this paper, we address the person re-identification problem, i.e., retrieving instances from a gallery that were generated by the same person as the given probe image. This is very challenging because the person's appearance usually undergoes significant variations due to changes in illumination, camera angle and view, background clutter, and occlusion over the camera network. We assume that the matched gallery images should not only be similar to the probe, but also be similar to each other, under a suitable metric. We express this assumption with a fully connected CRF model in which each node corresponds to a gallery image and every pair of nodes is connected by an edge. A label variable is associated with each node to indicate whether the corresponding image is from the target person. We define a unary potential for each node using existing feature calculation and matching techniques, reflecting the similarity between probe and gallery image, and define a pairwise potential for each edge in terms of a weighted combination of Gaussian kernels, which encode appearance similarity between pairs of gallery images. The specific form of the pairwise potential allows us to exploit an efficient inference algorithm to calculate the marginal distribution of each label variable in this densely connected CRF. We show the superiority of our method by applying it to public datasets and comparing with the state of the art.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122254019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
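The kind of inference the abstract above describes — unary potentials from probe similarity plus Gaussian-kernel pairwise potentials over all gallery pairs — can be sketched with a naive mean-field update for a binary, fully connected CRF. This is an illustrative toy (dense O(n²) kernel, single Gaussian, Potts compatibility), not the authors' efficient algorithm; all parameter names (`sigma`, `w`, `n_iters`) are assumptions.

```python
import numpy as np

def mean_field_reid(unary, feats, sigma=1.0, w=1.0, n_iters=10):
    """Mean-field inference for a fully connected binary CRF (sketch).

    unary: (n, 2) unary potentials (costs) for labels
           {0: not target, 1: target}, one row per gallery image.
    feats: (n, d) appearance features; the pairwise potential is a
           Gaussian kernel on feature distance with a Potts model,
           penalizing different labels on similar-looking images.
    Returns (n, 2) approximate marginal distributions per node.
    """
    n = len(unary)
    # Gaussian kernel on pairwise feature distances (zero self-affinity)
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = w * np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(K, 0.0)

    q = np.exp(-unary)
    q /= q.sum(1, keepdims=True)          # initialize with unary softmax
    for _ in range(n_iters):
        msg = K @ q                       # expected neighbor label mass
        # Potts compatibility: label l at node i is penalized by the
        # kernel-weighted mass its neighbors place on the other label.
        energy = unary + msg[:, ::-1]
        q = np.exp(-energy)
        q /= q.sum(1, keepdims=True)
    return q
```

The paper's point is that the Gaussian form of the pairwise term admits much faster message computation than the dense `K @ q` product used here.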