{"title":"The Sentinel Problem for a Multi-hop Sensor Network","authors":"D. Marinakis, S. Whitesides","doi":"10.1109/CRV.2010.40","DOIUrl":"https://doi.org/10.1109/CRV.2010.40","url":null,"abstract":"In the context of a multi-hop sensor network alarm application, we define the Sentinel Problem: How can a network of simple devices with limited communication ability signal the occurrence of an event that is capable of disabling the sensors? We present both deterministic and probabilistic methods for solving this problem, and evaluate the methods based on algorithmic correctness, false positive rates, latency, and implementation potential.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116614097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Texture CLassification Using Compressed Sensing","authors":"Li Liu, P. Fieguth","doi":"10.1109/CRV.2010.16","DOIUrl":"https://doi.org/10.1109/CRV.2010.16","url":null,"abstract":"This paper presents a simple, novel, yet very powerful approach for texture classification based on compressed sensing and bag of words model, suitable for large texture database applications with images obtained under unknown viewpoint and illumination. At the feature extraction stage, a small set of random features are extracted from local image patches. The random features are embedded into the bag of words model to perform texture classification. Random feature extraction surpasses many conventional feature extraction methods, despite their careful design and complexity. We conduct extensive experiments on the CUReT database to evaluate the performance of the proposed approach. It is demonstrated that excellent performance can be achieved by the proposed approach using a small number of random features, as long as the dimension of the feature space is above certain threshold. Our approach is compared with recent state-of-the-art methods: the Patch method (Varma and Zisserman, TPAMI 09), the MR8 filter bank method (Varma and Zisserman, IJCV 05) and the LBP method (Ojala et al., TPAMI 02). It is shown that the proposed method significantly outperforms MR8 and LBP and is at least as good as the Patch method with drastic reduction in storage and computational complexity.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134258034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Augmentation of the EKF Structure from Motion with Frame-to-Frame Features","authors":"Adel H. Fakih, J. Zelek","doi":"10.1109/CRV.2010.13","DOIUrl":"https://doi.org/10.1109/CRV.2010.13","url":null,"abstract":"The Extended Kalman Filter (EKF) is still one of the most widely used approaches for small scale Structure from Motion (SFM) and Simultaneous Localization And Mapping (SLAM) problems. However, the EKF does not have the ability to take into account the motion information carried by features matched only between two consecutive frames. This information is valuable because, when used appropriately, it generally enhances the performance of the filter. Two main reasons hinder the direct use of such features in the EKF: their un-initialized 3D location would corrupt the covariance matrix, and the computational cost grows cubically with the number of features. In this paper we present a novel approach to solve those problems. Our approach folds the frame-to-frame information in the filter through a separate update step that can be carried out in linear time. Other advantages of our approach is that it can be introduced to already implemented filters with minimal change. It can be done in a separate thread to further speedup the computation. Additionally, it can be further divided to multiple steps with different sets of features, which permits to reject or accept each step based on some performance criteria and to stay within the budgeted time.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133016055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multispectral Face Recognition in Texture Space","authors":"A. Bendada, M. Akhloufi","doi":"10.1109/CRV.2010.20","DOIUrl":"https://doi.org/10.1109/CRV.2010.20","url":null,"abstract":"This work introduces the use of LBP like texture descriptors for efficient multispectral face recognition. LBP has been widely used in visible spectrum face recognition. This work extend its use to non visible spectrums (active and passive infrared spectrums). Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) descriptors are used. Also a simple differential LTP descriptor (DLT) is introduced. The proposed texture space is less sensitive to noise, illumination change and facial expressions. These characteristics make it a good candidate for efficient multispectral face recognition. Linear and non linear dimensionality reduction techniques are introduced and used for performance evaluation of multispectral face recognition in the texture space. The obtained results show that the use of the proposed texture descriptors permit to achieve high recognition rates in multispectral face recognition.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132606717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object Inter-camera Tracking with Non-overlapping Views: A New Dynamic Approach","authors":"Trevor Montcalm, B. Boufama","doi":"10.1109/CRV.2010.53","DOIUrl":"https://doi.org/10.1109/CRV.2010.53","url":null,"abstract":"Disjoint inter-camera object tracking is the task of tracking objects across video-surveillance cameras that have non-overlapping views. Unlike the closely related task of single-camera tracking, disjoint inter-camera tracking is difficult due to the gaps in observation as an object moves between camera views. To overcome this problem, appearance profiles of the objects seen in each camera are built and used for matching/tracking across different cameras. This paper proposes a new method that uses multiple features that are dynamically weighed for matching moving objects (people in our case) across cameras. In particular, the Zernike moment shape descriptor has been used together with blob histogram and other features to describe a moving object. Weighting emphasis is given to the better features, based on their stability, reliability and their time in the system (how recent they are). This weighting is used both during appearance aggregation and object comparison. Our experiments with real videos have shown the success of our proposed method even in difficult situations where the cameras used are different in terms of brand, quality and resolution.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114933393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Classification of Operational SAR Sea Ice Images","authors":"S. Ochilov, David A Clausi","doi":"10.1109/CRV.2010.59","DOIUrl":"https://doi.org/10.1109/CRV.2010.59","url":null,"abstract":"The automated classification of operational sea ice satellite imagery is important for ship navigation and environmental monitoring. Annually, thousands of large synthetic aperture radar (SAR) scenes are manually processed by the Canadian Ice Service (CIS) and pixel-level interpretation is not feasible. Trained ice analysts divide SAR images into ”polygon” areas and then identify the number and type of ice classes per polygon. Full scene unsupervised classification can be performed by first segmenting each polygon into distinct regions algorithmically. Since there is insufficient information to assign a sea ice label for each region within an individual polygon, a Markov random field formulation using joint information to label each region in a full SAR scene has been developed. This approach has been successfully applied to operational CIS data to produce pixel-level classified images and is the first known successful end-to-end process for automatically classifying operational SAR sea ice images.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115468332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Curious George: An Integrated Visual Search Platform","authors":"D. Meger, Marius Muja, S. Helmer, Ankur Gupta, Catherine Gamroth, Tomas Hoffman, Matthew A. Baumann, T. Southey, Pooyan Fazli, W. Wohlkinger, P. Viswanathan, J. Little, D. Lowe, J. Orwell","doi":"10.1109/CRV.2010.21","DOIUrl":"https://doi.org/10.1109/CRV.2010.21","url":null,"abstract":"This paper describes an integrated robot system, known as Curious George, that has demonstrated state-of-the-art capabilities to recognize objects in the real world. We describe the capabilities of this system, including: the ability to access web-based training data automatically and in near real-time, the ability to model the visual appearance and 3D shape of a wide variety of object categories, navigation abilities such as exploration, mapping and path following, the ability to decompose the environment based on 3D structure, allowing for attention to be focused on regions of interest, the ability to capture high-quality images of objects in the environment, and finally, the ability to correctly label those objects with high accuracy. The competence of the combined system has been validated by entry into an international competition where Curious George has been among the top performing systems each year. We discuss the implications of such successful object recognition for society, and provide several avenues for potential improvement.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129050487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Bayesian Information Flow Approach to Image Segmentation","authors":"A. Mishra, A. Wong, David A Clausi, P. Fieguth","doi":"10.1109/CRV.2010.46","DOIUrl":"https://doi.org/10.1109/CRV.2010.46","url":null,"abstract":"A novel Bayesian information flow approach is presented for accurate image segmentation, formulated as a maximum a posteriori (MAP) problem as per the popular Mumford-Shah (MS) model. The model is solved using an iterative Bayesian estimation approach conditioned on the flow of information within the image, where the flow is based on inter-pixel interactions and intra-region smoothness constraints. In this way, a localized and accurate Bayesian estimate of the underlying piece-wise constant regions within an image can be found, even under high noise and low contrast situations. Experimental results using 2-D images show that the proposed Bayesian information flow approach is capable of producing more accurate segmentations when compared to state-of-the-art segmentation methods, especially under scenarios with high noise levels and poor contrast.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129865146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Layer Atlas System for Map Management","authors":"Jean-Luc Bedwani, Ioannis M. Rekleitis, F. Michaud, E. Dupuis","doi":"10.1109/CRV.2010.34","DOIUrl":"https://doi.org/10.1109/CRV.2010.34","url":null,"abstract":"Next generation planetary rovers will require greater autonomous navigation capabilities. Such requirements imply the management of potentially large and rich geo-referenced data sets stored in the form of maps. This paper presents the design of a data management system that can be used in the implementation of autonomous navigation schemes for planetary rovers. It also outlines an approach that dynamically manages a variety of data content and the uncertainty of the spatial relationship between two maps, in addition the proposed framework provides basic path planning operations through maps, and the correlation of maps in localization operations. Timing results from a rich data set demonstrate the efficiency of the proposed framework. In addition, experimental results on the usage of our Atlas management system by a rover performing autonomous navigation operations are also presented.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131098699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Identity Clustering","authors":"S. Prince, J. Elder","doi":"10.1109/CRV.2010.12","DOIUrl":"https://doi.org/10.1109/CRV.2010.12","url":null,"abstract":"Our goal is to establish how many different people are present in a set of N facial images, and determine the correspondence between people and images. Our approach is Bayesian: in the training phase, we learn a probabilistic generative model for face data. Individual identity is represented as a latent variable in this model, and is constrained to be identical when faces match. We use this model to calculate the likelihood for the whole dataset for each hypothesized clustering: using a process equivalent to Bayesian model selection, we marginalize over the unknown identity variables allowing us to compare models with differing numbers of people. For large datasets, it is not possible to exhaustively examine every possible clustering, and we introduce approximate algorithms to cope with this case. We demonstrate results both for frontal faces, and for face sets containing large pose variations. We present a detailed quantitative evaluation of the results for a standard dataset.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"624 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117085807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}